Cracking the AI Black Box

Artificial intelligence (AI) has grown by leaps and bounds in recent years. AI systems can now drive cars, make medical diagnoses, and handle many other decisions that people make on a day-to-day basis. The difference is that when a human makes such a decision, we can, at least to a certain extent, understand the reasoning behind it.

When it comes to AI, however, there's a "black box" behind these decisions: even AI developers themselves can't fully understand or anticipate the choices an AI makes. We know that neural networks learn to make these choices by being exposed to huge data sets, and that they then apply what they've learned to new situations. But it's difficult to trust what one doesn't understand.
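
To see why that opacity worries researchers, consider a minimal sketch (in Python with scikit-learn; the tooling and the tiny synthetic data set are illustrative assumptions, not anything from the research itself). A small network learns a hidden rule from data and classifies accurately, yet inspecting its learned parameters reveals only arrays of numbers, not reasoning:

```python
# A minimal sketch of the "black box" problem: the network learns a rule
# from data, but its learned parameters are opaque numbers, not explanations.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))   # a "huge data set" in miniature
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # hidden rule the network must discover

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)                            # the network "trains itself" on the data

print("accuracy:", net.score(X, y))      # it makes accurate decisions...
print("weights:", net.coefs_[0][:2])     # ...but the weights explain nothing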
The U.S. Defense Advanced Research Projects Agency (DARPA) wants to crack open this black box, and its first step is funding eight computer science professors from Oregon State University (OSU) with a $6.5 million research grant. “Ultimately, we want these explanations to be very natural — translating these deep network decisions into sentences and visualizations,” OSU's Alan Fern, principal investigator for the grant, said in a press release.

Sound and Informed Choices

The DARPA-OSU program, set to run for four years, will involve developing a system that allows AI to communicate with machine learning experts. The team will start by plugging AI-powered players into real-time strategy games like StarCraft and training them to explain the reasoning behind their in-game choices to human players. This isn't the first project to put AIs into video game environments: Google's DeepMind has also chosen StarCraft as a training environment for AI, and there's the controversial Doom-playing AI bot.
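
As a toy illustration of what such a system might look like (this sketch is invented, not the DARPA-OSU design; the game state, thresholds, and rules are all hypothetical), an agent could return a plain-language rationale alongside each action rather than the action alone:

```python
# A toy "explainable agent": each decision is paired with a rationale.
# All game-state fields and decision rules here are invented for the example.
from dataclasses import dataclass

@dataclass
class GameState:
    enemy_army_size: int
    own_army_size: int
    minerals: int

def choose_action(state: GameState) -> tuple[str, str]:
    """Return (action, explanation) instead of the action alone."""
    if state.enemy_army_size > state.own_army_size:
        return ("retreat",
                f"Retreating because the enemy army ({state.enemy_army_size}) "
                f"outnumbers ours ({state.own_army_size}).")
    if state.minerals >= 400:
        return ("expand",
                f"Expanding because we have {state.minerals} spare minerals "
                "and no immediate military threat.")
    return ("attack",
            "Attacking because we hold a numeric advantage and expanding "
            "is not yet affordable.")

action, why = choose_action(GameState(enemy_army_size=12,
                                      own_army_size=20,
                                      minerals=150))
print(action, "->", why)
```

A real system would learn its policy from gameplay and generate the sentence with a model rather than hand-written rules, but the interface idea, a decision paired with an explanation, is the same.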

DARPA would then apply the results of this research to its existing work with robotics and unmanned vehicles. Potential applications of AI in law enforcement and the military make it especially important that these systems behave ethically, and that people can verify they do.

“Nobody is going to use these emerging technologies for critical applications until we are able to build some level of trust, and having an explanation capability is one important way of building trust,” Fern said. Thankfully, this DARPA-OSU project isn't the only one working to humanize AI and make it more trustworthy.

