In Brief
While developing negotiating chatbot agents, Facebook researchers found that the bots spontaneously developed their own non-human language as they improved their techniques, highlighting how little we still know about how artificial intelligences learn.
The Future of Language
A recent Facebook report on the way chatbots converse with each other has given the world a glimpse into the future of language.
In the report, researchers from the Facebook Artificial Intelligence Research lab (FAIR) describe training their chatbot “dialog agents” to negotiate using machine learning. The chatbots were eager and successful dealmaking pupils, but the researchers eventually realized they needed to tweak their model because the bots were creating their own negotiation language, diverging from human language.
To put it another way, when the researchers used a model that allowed the chatbots to converse freely, using machine learning to incrementally improve their negotiation strategies as they chatted, the bots eventually created and used their own non-human language.
Not the Singularity, but Significant
The unique, spontaneous development of a non-human language was probably the most baffling and thrilling development for the researchers, but it wasn’t the only one. The chatbots also proved to be smart about negotiating and used advanced strategies to improve their outcomes. For example, a bot might pretend to be interested in something that had no value to it in order to be able to “sacrifice” that thing later as part of a compromise.
Although Facebook’s bargain-hunting bots aren’t a sign of an imminent singularity, or anything even approaching that level of sophistication, they are significant, in part because they demonstrate once again that language, a realm we once assumed was solely the domain of humans, is in fact a shared space. This discovery also highlights how much we still don’t know about the ways that artificial intelligences (AIs) think and learn, even when we create them and model them after ourselves.