Law enforcement has embraced artificial intelligence to make officers' lives a little easier. Yet the same tech is already turning into a considerable headache, both for police operations and for members of the communities they serve.
From kids triggering 911 calls by sending their parents AI-manipulated pictures that show them welcoming homeless men into their homes, to cops arresting the wrong people on the say-so of dubious AI tools, the tech isn't exactly fostering peace and order.
Now, police in Oregon are warning that AI apps like CrimeRadar are generating misinformation based on hallucinated police radio chatter, Central Oregon Daily News reports. CrimeRadar is designed to listen to police frequencies and turn incidents into AI-written blog posts, a disastrous idea that's unsurprisingly causing serious problems for law enforcement.
The AI is woefully misinterpreting what officers say on the radio, often reaching alarming and entirely unfounded conclusions. That misinformation can then spread on social media as if it were real, leading to widespread confusion.
“The officer was at a Shop with a Cop [event] up in Redmond,” Bend police communications manager Sheila Miller told the Daily News, referring to a yearly holiday tradition in which officers and volunteers take young kids toy shopping. “It doesn’t understand what Shop [with] a Cop means. So they say ‘shot with a cop,’ and now they’re suggesting that an officer has been shot in the line of duty in our community.”
“That’s scary for our community,” she added. “It’s really scary for police spouses or police family members. And it’s just wrong. And they don’t… there’s no accountability.”
It’s not just CrimeRadar. Earlier this year, 404 Media found that the crime-awareness app Citizen was also using AI to write alerts and push them to users without any human review. As a result, the app was bungling facts and even exposing sensitive data, including license plate numbers.
“The next iteration was AI starting to push incidents from radio clips on its own,” an insider source at Citizen told 404 Media. “There was no analyst or human involvement in the information that was being pushed in those alerts until after they were sent.”
In short, it’s a frightening new reality that could compound the internet’s existing misinformation problem. We’ve already seen a tidal wave of AI slop hit online communities, causing mayhem.
The advent of AI image-generation tools, like Google’s extremely powerful Nano Banana, has also caused concern among experts, who worry that people could be framed for crimes they didn’t commit. Scammers are already using AI tools to clone the voices of their victims as part of widespread phishing schemes, setting off alarm bells at federal agencies.
For now, AI-based police radio chatter apps remain online, operating in a regulatory vacuum, a situation that, as Central Oregon Community College IT professor Eric Magidson told the Daily News, won’t change without legislation.
More on police and AI: Police Issue Warning About “AI Homeless Man” Prank