This feels like a recipe for disaster.

Sure, Guy

A crime-prognosticating AI now exists, and it's been implemented in a number of American cities. But don't worry, it definitely won't be misused by city police forces — at least according to its lead creator, University of Chicago professor Ishanu Chattopadhyay.

Chattopadhyay recently sat down with BBC Science Focus to discuss the system, which, according to a study published in the journal Nature Human Behaviour, can predict where and when a crime might occur with 80 to 90 percent accuracy. Whether those predictions hold up in the real world matters, of course, but it isn't the core question here. The real question is whether AI can be incorporated into a police force without abuses — and while Chattopadhyay believes his system can be, AI's track record in policing says otherwise.

"People have concerns that this will be used as a tool to put people in jail before they commit crimes," Chattopadhyay told BBC. "That's not going to happen, as it doesn't have any capability to do that."

Bad Record

The incorporation of AI into policing is hardly new, and neither is the controversy surrounding it — similar software has already been implicated in wrongful imprisonment and even the wrongful death of an unarmed 13-year-old child. More generally, AI across the board is known to be fickle, riddled with racial and other discriminatory biases, and notoriously difficult to explain, even by those who build it.

Speaking to the magazine, Chattopadhyay — who says he hopes his AI will be used to curb crime through social and political measures, not just more policing — acknowledged some of these concerns, particularly AI's well-documented racism problems. He believes other AI systems are simply too simplistic, relying far too heavily on information like arrest histories and individual characteristics. In contrast, his system uses only event log data, which he claims helps to "reduce bias as much as possible."

"It just predicts an event at a particular location," he added. "It doesn't tell you who is going to commit the event or the exact dynamics or mechanics of the events. It cannot be used in the same way as in the film 'Minority Report.'"

Sure. Look, we're never ones to shy away from optimism, but given the record of systemic abuses of similar experimental cop-assisting software — not to mention the systemic abuses against vulnerable populations that already exist within American policing, minus any AI — this optimism seems far-fetched.

READ MORE: An algorithm can predict future crimes with 90% accuracy. Here’s why the creator thinks the tech won’t be abused [BBC Science Focus]

More on AI projects we don't need: Cursed Startup Using AI to Remove Call Center Workers' Accents

