Automated Sensitivity

A New Algorithm Trains AI to Erase its Biases

It's like sensitivity training for algorithms.

1.29.19 by Dan Robitzski

Image by hobs/Victor Tangermann

Sensitivity Training

In recent years, artificial intelligence has struggled with a major PR problem: intentionally or not, developers keep building biases into their systems, creating algorithms that reflect the same prejudiced perspectives common in society.

That’s why it’s intriguing that engineers from MIT and Harvard University say they’ve developed an algorithm that can scrub the bias from AI — like sensitivity training for algorithms.

Machines Teaching Machines

The tool audits algorithms for biases and helps re-train them to behave more equitably, according to new research presented this week at the Conference on Artificial Intelligence, Ethics and Society.

Even so, once complex AI systems are deployed in the real world, it becomes very difficult to evaluate exactly how they make their decisions. That's why automating the process is so important — the new tool can go in and reconfigure how much weight the AI system gives to each aspect of its training data, according to the research.


For instance, if an algorithm had been trained on data that taught it to rate black applicants as poor candidates for a job, the new tool could help retrain it to evaluate candidates on the relevant parts of their applications instead.
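The researchers' actual system is more sophisticated (it learns the structure of the training data automatically), but the general idea of re-weighting training data so underrepresented groups aren't drowned out can be sketched in a few lines. This is an illustrative example, not the authors' code; the `debias_weights` function and the "balanced" weighting formula are assumptions chosen for clarity:

```python
# Illustrative sketch only — NOT the MIT/Harvard tool itself.
# Idea: give each training sample a weight inversely proportional to
# how common its group is, so every group contributes equally overall.
from collections import Counter

def debias_weights(group_labels):
    """Return one weight per sample so each group's total weight is equal."""
    counts = Counter(group_labels)       # how many samples per group
    n_groups = len(counts)
    total = len(group_labels)
    # "balanced" weighting: total / (n_groups * group_count)
    return [total / (n_groups * counts[g]) for g in group_labels]

groups = ["A", "A", "A", "B"]            # group B is underrepresented
weights = debias_weights(groups)
# group A samples each get 4/(2*3) ≈ 0.67; the lone B sample gets 4/(2*1) = 2.0,
# so both groups carry the same total weight (2.0) during training
```

In a real training loop, these weights would scale each sample's contribution to the loss, which is one common way to keep a model from learning that rarity in the data implies lower quality.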

Systemic Problem

Of course, it’s possible the new algorithm could have biases of its own. But given that artificial intelligence systems are already out in the field actively recommending that cops over-police areas with more racial minorities, it’s urgently important that researchers tackle algorithmic bias.

“Facial classification in particular is a technology that’s often seen as ‘solved,’ even as it’s become clear that the datasets being used often aren’t properly vetted,” Alexander Amini, an MIT AI researcher who helped develop the new tool, told TechXplore. “Rectifying these issues is especially important as we start to see these kinds of algorithms being used in security, law enforcement and other domains.”

READ MORE: An AI that ‘de-biases’ algorithms [TechXplore]


More on algorithmic bias: Microsoft Announces Tool To Catch Biased AI Because We Keep Making Biased AI


