While developing negotiating chatbot agents, researchers at the Facebook Artificial Intelligence Research (FAIR) lab noticed back in June that the artificially intelligent (AI) bots had spontaneously developed their own non-human language.
In a report explaining their research, they noted that this development stemmed from the systems’ goal of improving their negotiation strategies: the system of code words the bots started to use was clearly designed to maximize the efficiency of their communication.
Although the bots started out speaking English, the researchers realized they had never provided a reward for sticking to English. In other words, the systems had no reason to keep using the language, as it didn’t contribute to their end goal of becoming more efficient negotiators. In fact, the systems had multiple incentives to veer away from it, much as communities of humans with expertise or niche knowledge create and use shorthand to discuss complex ideas more quickly and efficiently.
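The report doesn’t spell out the exact reward functions FAIR used, but the underlying incentive problem is easy to illustrate. In the toy sketch below (all function names, vocabulary, and numbers are invented for illustration, not FAIR’s actual setup), an agent rewarded only for task efficiency prefers drifted shorthand, while adding a term that rewards English-like output removes that incentive:

```python
# Hypothetical sketch of reward-driven language drift.
# Nothing here reflects FAIR's actual models; it only illustrates the incentive.

def task_reward(utterance: str) -> float:
    """Toy stand-in for negotiation payoff: shorter messages 'negotiate' faster."""
    return 10.0 - len(utterance.split())

def english_likelihood(utterance: str, vocabulary: set) -> float:
    """Crude proxy for 'sounds like English': fraction of words in a known vocabulary."""
    words = utterance.split()
    return sum(w in vocabulary for w in words) / len(words)

# Tiny illustrative vocabulary of "valid English" tokens.
VOCAB = {"i", "want", "two", "books", "and", "one", "hat", "you", "get", "the", "ball"}

def shaped_reward(utterance: str, weight: float = 5.0) -> float:
    """Task payoff plus a bonus for staying in recognizable English."""
    return task_reward(utterance) + weight * english_likelihood(utterance, VOCAB)

english = "i want two books and one hat"   # 7 words, all in-vocabulary
drifted = "balls balls balls to me"        # shorter, denser code-speak

# Under the task reward alone, the drifted shorthand scores higher,
# so an optimizer has every reason to abandon English.
print(task_reward(drifted) > task_reward(english))       # True

# With the English-likelihood term added, staying in English wins.
print(shaped_reward(english) > shaped_reward(drifted))   # True
```

The point of the sketch is simply that agents optimize whatever the reward measures: if intelligibility to humans isn’t part of the objective, there is nothing holding the language in place.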
In that sense, the behavior should have been predictable. It was, in some sense, a very human adaptation as it was designed to enhance performance and minimize effort — something the human brain excels at.
As they explained in their June post, the researchers could decode the new language with fairly little trouble as it was still English-based, but they could never be certain that their translations were 100 percent correct. “It’s important to remember, there aren’t bilingual speakers of AI and human languages,” FAIR’s Dhruv Batra told Fast Code Design. This new language also didn’t serve the purpose of the research. “Our interest was having bots who could talk to people,” explained Mike Lewis, a research scientist at FAIR.
In the end, the researchers tweaked the agents’ training so they would stop drifting away from English.
The initial spontaneous development of the independent language highlights how much we still don’t understand about AIs, which is a huge part of the debate regarding AI research. AI could undoubtedly help us, and very few dispute that the technology is here to stay. However, the way we prepare for a world shared with AI, and whether or not that world will be safe for humans, is hotly debated.
To be sure, much AI-related fear is rooted more in science fiction than in fact. According to Nigel Shadbolt, Oxford professor of artificial intelligence and chairman of the Open Data Institute, “We most certainly need to consider the restraints and safeguards that we need to engineer into the hardware, software, and deployment policies of our current AI systems. But the next self-aware computer you encounter will only be appearing at a cinema near you.”
The language issue cropping up at FAIR and elsewhere appears to fall squarely within the realm of restraints and safeguards. Should we allow AIs to develop task-specific dialects if they improve performance, knowing it would mean we couldn’t truly understand what they were saying?
Many experts urge that we err on the side of caution. Georgia Tech AI researcher Mark Riedl told Future of Life that AIs trained to optimize rewards could eventually come to see humans as a threat to their optimization plans.
Perhaps the most vocal warnings about AI advancing too quickly have come from Elon Musk and Stephen Hawking. One of the most salient points in their arguments is that by the time we perceive a risk, it may be too late. That may be the best argument of all for shutting down the chatter in a project like this.