China’s “Minority Report” Style Plans Will Use AI to Predict Who Will Commit Crimes
“If someone buys a kitchen knife that’s OK, but if the person also buys a sack and a hammer later, that person is becoming suspicious.”
Authorities in China are exploring predictive analytics, facial recognition, and other artificial intelligence (AI) technologies to help predict and prevent crime. When behavior patterns look suspicious, the systems will notify local police about potential offenders.
Cloud Walk, a company headquartered in Guangzhou, has been training its facial recognition and big data rating systems to track individuals’ movements and rate their risk levels. Frequent visitors to weapons shops or transportation hubs are likely to be flagged in the system, and authorities have deemed even places like hardware stores “high risk.”
A Cloud Walk spokesman told The Financial Times, “Of course, if someone buys a kitchen knife that’s OK, but if the person also buys a sack and a hammer later, that person is becoming suspicious.” Cloud Walk’s software is connected to the police database across more than 50 cities and provinces, and can flag suspicious characters in real time.
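The kind of purchase-pattern flagging the spokesman describes can be sketched as a simple rule-based risk scorer. This is purely illustrative: the item names, weights, and threshold below are invented, and Cloud Walk's actual system is proprietary and almost certainly far more complex.

```python
# Hypothetical sketch of rule-based purchase-pattern risk scoring,
# loosely inspired by the knife/sack/hammer example quoted above.
# All item names, weights, and thresholds are invented for illustration.

from collections import Counter

# Invented weights: a knife alone scores low, but certain combinations add a bonus.
ITEM_WEIGHTS = {"kitchen knife": 1, "sack": 2, "hammer": 2}
COMBO_BONUS = 3          # extra score when knife, sack, and hammer all appear
FLAG_THRESHOLD = 6       # scores at or above this trigger a review flag

def risk_score(purchases):
    """Sum per-item weights, plus a bonus for the suspicious combination."""
    counts = Counter(purchases)
    score = sum(ITEM_WEIGHTS.get(item, 0) for item in counts)
    if all(item in counts for item in ("kitchen knife", "sack", "hammer")):
        score += COMBO_BONUS
    return score

def is_flagged(purchases):
    return risk_score(purchases) >= FLAG_THRESHOLD
```

Under these made-up weights, buying only a kitchen knife scores 1 and goes unflagged, while the full combination scores 8 and trips the flag.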
China is also using “personal re-identification” in crime prediction: identifying the same person in different places, even if they’re wearing different clothes. “We can use re-ID to find people who look suspicious by walking back and forth in the same area, or who are wearing masks,” Beijing University of Aeronautics and Astronautics professor of bodily recognition Leng Biao told The Financial Times. “With re-ID, it’s also possible to reassemble someone’s trail across a large area.”
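At its core, re-ID works by embedding each camera sighting of a person into a feature vector (in real systems, via a deep neural network) and then linking sightings whose vectors are sufficiently similar. The sketch below illustrates only the matching step with hand-made vectors; the function names and the threshold are assumptions for illustration, not any vendor's actual API.

```python
# Hypothetical sketch of the matching step in person re-identification (re-ID).
# Real systems embed whole-body images with a trained network; here each
# "sighting" is a made-up feature vector, and the 0.9 threshold is invented.

import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def link_sightings(query, gallery, threshold=0.9):
    """Return indices of gallery sightings whose embeddings match the query."""
    return [i for i, emb in enumerate(gallery)
            if cosine_similarity(query, emb) >= threshold]
```

Linking matched sightings in time order is what allows a trail to be reassembled across cameras, even when clothing changes alter the raw images but not the learned features.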
China is, in many ways, the ideal place to use this kind of technology. The government has an extensive archive of data from citizen records and more than 176 million surveillance cameras. In other words, China has an embarrassment of riches when it comes to big data, and can train its AI systems very effectively, without any meaningful legal hurdles.
AI and Safety
These aren’t the only ways that China is extending its AI capabilities. The government just revealed a massive, well-organized, and well-funded plan to make China the global leader in AI by 2030. The nation deploys facial recognition in schools to counter cheating, on streets to fight jaywalking, and even in bathrooms to limit toilet paper waste. It should come as no surprise that the Chinese government would also employ these technologies to prevent crime — and maybe even predict it.
“If we use our smart systems and smart facilities well, we can know beforehand . . . who might be a terrorist, who might do something bad,” China’s vice-minister of science and technology Li Meng said to The Financial Times.
However you feel about China’s Minority Report-style plans, AI is also making the world safer. Although AI is certainly a potential surveillance tool, it can also be used to protect privacy, keep healthcare records private, secure financial transactions, and prevent hacking. AI is responsible for smart security cameras, robot guards, and better military technologies. AI is also the reason self-driving cars may eventually eliminate up to 90 percent of traffic fatalities. In other words, while you might object to certain applications, it’s hard to argue against AI technology on the whole if you’re concerned with the future of safety and privacy both online and off.