Well-intentioned? Yes. Invasive and thorny? Also yes.
A team of computer scientists has a well-intentioned but thorny plan to reduce recidivism, the rate at which prisoners return to prison once released, and it involves constantly monitoring released prisoners as they go about their lives.
The idea? Giving parolees (who would volunteer for this program) smartphones and biometric wearables to monitor their biological data, pictures they take, and location information, all in the hopes of training artificial intelligence to identify patterns linked to regressions into criminal behavior.
Insights that help keep people out of prison could be useful, of course, but how this program would glean those insights is (to put it lightly) ethically fraught.
The Purdue scientists mention in a press release that the AI algorithm they created would analyze data in clumps, rather than in real time. The study design would leave half of the volunteers, who are already members of a vulnerable population, entirely unmonitored as a control group, which raises the question of what they stand to gain from participating at all. Meanwhile, the monitored group would likely be on its very best behavior, knowing that scientists are watching its every move, which ultimately calls into question the integrity of data gathered in this particular way.
That said, there are noble motivations at work here.
"The goal of the study is to identify opportunities for early intervention to better assist those individuals to integrate back into general society successfully," Purdue University computer scientist Marcus Rogers said in the release.
But then again, how many times must we suffer technological solutions (surveillance AI) to societal problems (criminal recidivism) before we realize that our problems might be too big for one algorithm to solve?
READ MORE: Artificial intelligence examines best ways to keep parolees from recommitting crimes [Purdue University]
More on prisoner AI: A Finnish Startup Is Using Prison Labor to Train AI