Neural networks are great at solving problems, but bad at showing their work.

BLACK BOXES

The type of artificial intelligence known as a neural network can be trained to complete tasks once thought to be exclusive to humans, such as driving a car, creating visual art, or composing a heavy metal album. But neural networks have a big problem: they're enormously complex. So complex, in fact, that researchers have often struggled to explain precisely why they make specific decisions.

Now, researchers at the Massachusetts Institute of Technology (MIT) say they've created a neural network that can explain the steps it took to solve a problem — an advance that could help us better understand how the technology works and alleviate safety concerns in riskier applications, like self-driving cars.

SUBTASK FORCE

The new algorithm, called the Transparency by Design Network (TbD-net), breaks down the process of recognizing an image into subtasks. For each subtask, it draws a heat map that highlights the parts of the image it's focusing on.

Say you ask the algorithm to identify large metal cubes in an image that contains a variety of objects. First it would highlight only the large objects — some might be cubes, but some would probably be other things, too. Then it would highlight the large metal objects, and finally the large metal cubes. The result is a neural network that can show how it got to a particular conclusion in a way a human can understand.
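To make that chain of steps concrete, here's a minimal, hypothetical sketch in Python. This is not MIT's actual TbD-net code (the real system is a full neural network trained end to end); the `attend` function and the made-up "large"/"metal"/"cube" filters below are stand-ins that just illustrate the idea of each step producing a heat map that narrows down the previous one.

```python
# Hypothetical sketch of chained attention steps, in the spirit of the
# TbD-net example above. Each "module" scores every spatial cell of an
# image feature map for one concept, then intersects that score with the
# mask produced by the previous step, so every intermediate heat map can
# be inspected by a human.

import numpy as np

def attend(features, prev_mask, concept_filter):
    """One toy reasoning step: score each cell for a concept and
    combine the result with the mask from earlier steps."""
    scores = np.einsum("hwc,c->hw", features, concept_filter)  # concept heat map
    scores = 1.0 / (1.0 + np.exp(-scores))                     # squash to [0, 1]
    return prev_mask * scores                                  # keep only cells that passed prior steps

# Toy inputs: an 8x8 feature map with 16 channels, plus random stand-in
# filters for the three concepts in the "large metal cubes" example.
rng = np.random.default_rng(0)
features = rng.normal(size=(8, 8, 16))
filters = {name: rng.normal(size=16) for name in ["large", "metal", "cube"]}

mask = np.ones((8, 8))  # start by attending to the whole image
for step in ["large", "metal", "cube"]:
    mask = attend(features, mask, filters[step])
    peak = np.unravel_index(mask.argmax(), mask.shape)
    print(f"after '{step}': attention peaks at cell {peak}")
```

The design point the sketch tries to capture: because every intermediate mask is a visualizable heat map, a human can audit each step of the chain rather than only the final answer.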

A STEP FORWARD

Now imagine how useful that kind of transparency would be in neural networks that solve other types of problems. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a 'black box' method,” said Tommi Jaakkola, an MIT professor who wasn't involved with TbD-net, in an interview last year with the MIT Technology Review.

Anything that provides new insight into how AI works and soothes experts who are jittery about turning important tasks over to technology? Can't be bad for our future.

READ MORE: Artificial intelligence system uses transparent, human-like reasoning to solve problems [MIT News]

More on neural networks: DeepMind Develops a Neural Network That Can Make Sense of Objects Around It

