Researchers: the tech "should not be allowed to play a role in important decisions about human lives."

Hold Off

The AI Now Institute, an artificial intelligence research group, released its annual report on the state of the field and, more importantly, on how the technology is being used in society.

The report came with a grim warning, MIT Technology Review reports. It argues that because AI tools like facial recognition, and especially emotion-detecting algorithms, can be highly inaccurate and can propagate systemic racial and gender biases, they should be kept out of consequential decisions about people's lives.

Stepping In

The report called for greater oversight from both governments and the tech companies that keep pumping this flawed AI out into the world. At the very least, it argues, the technology shouldn't be used until it's demonstrated to be accurate and fair, and until regulators figure out how to control it.

"Given the contested scientific foundations of affect recognition technology — a subclass of facial recognition that claims to detect things such as personality, emotions, mental health, and other interior states," reads the first recommendation of the new report, "it should not be allowed to play a role in important decisions about human lives, such as who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school."

READ MORE: Emotion recognition technology should be banned, says an AI research institute [MIT Technology Review]

More on AI: The City of Oakland Votes to Ban Facial Recognition

