In Brief
On November 7, China announced plans to build an unmanned, AI-powered police station in Wuhan, the capital of Hubei province. The move is in keeping with China's goal of becoming a world leader in AI by 2030.
On November 7, China announced plans to open an unmanned police station powered by artificial intelligence (AI) in Wuhan, the capital of Hubei province. The AI police station will likely focus on vehicle- and driver-related issues, which makes it more analogous to an American Department of Motor Vehicles (DMV) office than a police precinct (sorry, Robocop), but the decision to build it is still right in line with China’s plans to be a world leader in AI by 2030.
According to a report from the Chinese financial paper Caijing Neican, the futuristic station will offer simulated driver examinations and provide registration services. Cutting-edge facial-recognition technology developed by Tencent will identify citizens within the station. The idea is that this will eliminate the need for visitors to wait at the station for long periods, sign up for accounts, or download apps — the AI will access all pertinent information as soon as it sees the person’s face.
At first blush, the AI police station sounds like it could be an elegant, smooth addition to Chinese infrastructural services. Since it will be open to the public 24/7, with dedicated hardware, it could eliminate many of the slip-ups caused by human workers and frustrating web-based failures. If successful, the station could lead to the development of additional unmanned government facilities in China or abroad.
However, while these high-tech stations could benefit visitors, their creation opens up the wider discussion as to how advancements in robotics and automated systems will affect society at large, particularly in terms of work.
Will a fully automated world leave most humans behind to starve in poverty? Would an automated infrastructure force humanity to self-actualize into a philanthropic and classless society? Will humans cease to be defined by economic net worth and the realpolitik of lifelong careerists? Or will class disparities widen as the richest portion of society hoards AI resources, leading to a regressive, mercantile capitalism?
Many studies have predicted an ineluctable global progression toward automated workplaces. A 2013 Oxford University study predicted that 47 percent of U.S. jobs could be automated within the next 20 years, and in 2016, Forrester predicted that cognitive technologies will take over 7 percent of U.S. jobs by 2025.
“If societies do not adapt quickly enough, this can cause instability,” Irakli Beridze, senior strategic adviser at the UN Centre for Artificial Intelligence and Robotics, told Dutch newspaper de Telegraaf. Although his office is focused on helping the world maximize the benefits of AI, he is aware of the potential for a negative outcome and hopes to help the world avoid any pitfalls.
“One of our most important tasks is to set up a network of experts from business, knowledge institutes, civil society organizations, and governments,” Beridze said. “We certainly do not want to plead for a ban or a brake on technologies. We will also explore how new technology can contribute to the sustainable development goals of the UN. For this, we want to start concrete projects. We will not be a talking club.”
The potential harm AI can cause is not limited to joblessness — an ill-meaning hacker with the keys to a city’s fully automated infrastructure and public services could cause serious destruction. How could the mayor of New York, for example, deny the demands of a hacker who, with a push of a button, could gas a hospital full of patients or raise a fully loaded bridge or send a nuclear power plant into a meltdown?
Infrastructural automation like the AI police station China plans to build could solve a number of the world’s problems, but with it will come new challenges. Before we begin relying on AI and automation too heavily, we’ll need to explore all possible scenarios and do whatever is necessary to ensure the benefits outweigh the risks.