AI is no longer just identifying suspected criminals from behind a camera; now it’s rendering photorealistic images of their mugs for cops to blast out on social media.
Enter ChatGPT, the latest member of the Goodyear Police Department, located on the outskirts of Phoenix. New reporting by the Washington Post revealed that Goodyear cops are using the generative AI tool to pop out photos of suspects in place of pen-and-paper police sketches.
“We are hopeful that these new techniques and AI technology will assist in solving more complex cases in the future, here in Arizona and around the country,” the Goodyear PD wrote on its social media account when it debuted its first AI-generated image. That image was meant to help locate a suspect in a kidnapping case, and the department said it resulted in a huge influx of tips.
Notably, the AI photos haven’t led to any arrests.
But what’s odd about the initiative is the reasoning behind it. Speaking to WaPo, Mike Bonasera, the Goodyear PD’s sketch artist, said the move has a lot to do with social media engagement.
“We’re now in a day and age where if we post a pencil drawing, most people are not going to acknowledge it,” he admitted. Basically, the Goodyear PD believes that local residents, especially the younger crowd, are much more likely to interact with a hyper-realistic AI rendering.
“People are so visual, and that’s why this works,” Bonasera said.
The Goodyear PD insists its AI renderings aren’t “AI fabrications.” That, the department says, is because the images begin as traditional composite drawings, which are already unreliable, before being fed into ChatGPT.
That’s a dicey contention. When an image generator like ChatGPT spits out a picture of a person, it’s drawing on patterns learned from a huge dataset of photos of real people. That means whatever biases were baked into that dataset, like a lopsided ratio of white to Black faces, can show up in the final image.
“That was something we saw early on with some of these generators,” Bryan Schwartz, a law professor at the University of Arizona, told WaPo. “That they were really good at creating white faces and not as good at creating some other races.”
When we’re talking about generating images of real people, that kind of systemic bias can have devastating results, especially when cops refuse to acknowledge it. Police departments’ reliance on AI facial recognition to track down suspects, for example, has already led to numerous false arrests in cities including Detroit, New York, and Atlanta.
Luckily, Goodyear seems to be the only police department in the US using AI to blast out biased composites of suspects, or at least the only one admitting to it. Unfortunately, cops love their tech, which means it may just be a matter of time before we see more of this nationwide.
More on policing: Police Use Busted Facial Recognition System, Arrest Random Man and Accuse Him of Horrible Crime