Partial Credit

Google Brain Built a Translator so AI Can Explain Itself

January 11th 2019 / Dan Robitzski / Filed Under: Artificial Intelligence
Image: Tag Hartman-Simkins

Show Your Work

A Google Brain scientist built a tool that can help artificial intelligence systems explain how they arrived at their conclusions — a notoriously tricky task for machine learning algorithms.

The tool, called Testing with Concept Activation Vectors, or TCAV for short, can be plugged into machine learning algorithms to suss out how much weight they gave different factors or types of data before churning out results, Quanta Magazine reports.
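At its core, TCAV works by learning a direction in a network's internal activation space that corresponds to a human concept, then measuring how sensitive the model's predictions are to that direction. The sketch below illustrates that idea on synthetic data; the activations, gradients, and dimensions are made up for the example, and a real TCAV run would use activations and gradients taken from an actual trained network.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for a network layer's activations: 50 examples
# that show a concept of interest, and 50 unrelated "random" examples.
concept_acts = rng.normal(loc=1.0, size=(50, 8))
random_acts = rng.normal(loc=0.0, size=(50, 8))

# Step 1: fit a linear classifier separating concept vs. random activations.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 50 + [0] * 50)
clf = LogisticRegression().fit(X, y)

# Step 2: the Concept Activation Vector (CAV) is the classifier's
# weight direction, normalized to unit length.
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Step 3: the TCAV score is the fraction of inputs whose class score
# increases when activations move in the concept direction, i.e. whose
# directional derivative along the CAV is positive. Here the gradients
# are toy vectors nudged toward the concept direction for illustration.
logit_grads = rng.normal(size=(100, 8)) + 0.5 * cav
tcav_score = float(np.mean(logit_grads @ cav > 0))

print(f"TCAV score: {tcav_score:.2f}")
```

A score near 1 would suggest the concept strongly influences the prediction; a score near 0.5 on suitably controlled comparisons would suggest little influence, which is how a tool like this can flag, say, an unwanted factor creeping into a model's decisions.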
Tools like TCAV are in high demand as artificial intelligence comes under greater scrutiny for the racial and gender bias that plagues both the algorithms and the training data used to develop them.

With TCAV, people using a facial recognition algorithm would be able to determine how much it factored in race when, say, matching up people against a database of known criminals or evaluating their job applications. This way, people will have the choice to question, reject, and maybe even fix a neural network’s conclusions rather than blindly trusting the machine to be objective and fair.

Good Enough!

Google Brain scientist Been Kim told Quanta that she doesn’t need a tool that can totally explain AI’s decision-making process. Rather, it’s good enough for now to have something that can flag potential issues and give humans insight into where something may have gone wrong.

She likened the concept to reading the warning labels on a chainsaw before cutting down a tree.

“Now, I don’t fully understand how the chain saw works,” Kim told Quanta. “But the manual says, ‘These are the things you need to be careful of, so as to not cut your finger.’ So, given this manual, I’d much rather use the chainsaw than a handsaw, which is easier to understand but would make me spend five hours cutting down the tree.”

READ MORE: A New Approach to Understanding How Machines Think [Quanta Magazine]