Much of today’s artificial intelligence (AI) relies on machine learning, in which machines learn patterns from a data set and then respond or act autonomously. Machine learning algorithms, in a sense, predict outcomes based on previously observed examples. Researchers from OpenAI discovered that a machine learning system they created to predict the next character in the text of Amazon reviews developed into an unsupervised system that could learn representations of sentiment.
“We were very surprised that our model learned an interpretable feature, and that simply predicting the next character in Amazon reviews resulted in discovering the concept of sentiment,” OpenAI, a non-profit AI research company backed by Elon Musk, Peter Thiel, and Sam Altman, explained on its blog. OpenAI’s neural network trained itself to analyze sentiment by classifying reviews as either positive or negative, and was even able to generate text with a desired sentiment.
The AI was a multiplicative long short-term memory (LSTM) trained for a month with “4,096 units on a corpus of 82 million Amazon reviews to predict the next character in a chunk of text.” After training this mLSTM, the researchers turned the model into a sentiment classifier using a linear combination of these units. When they noticed that their model was using few of the learned units, they discovered that there was a single “sentiment neuron” with a highly predictive sentiment value.
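The probing step described above can be sketched in miniature. The snippet below is a hypothetical illustration, not OpenAI's code: it simulates hidden states in which one arbitrarily chosen unit carries a sentiment signal (the real mLSTM has 4,096 units), fits a linear logistic-regression probe by plain gradient descent, and then checks which unit receives the largest weight, mirroring how the researchers spotted a single highly predictive "sentiment neuron."

```python
import numpy as np

# Hypothetical sketch of probing a language model's hidden states for sentiment.
# We simulate a small case where unit 7 (an arbitrary choice) carries the signal.
rng = np.random.default_rng(0)
n_units, n_reviews, sentiment_unit = 64, 500, 7

labels = rng.integers(0, 2, n_reviews)               # 0 = negative, 1 = positive
states = rng.normal(0.0, 1.0, (n_reviews, n_units))  # stand-in hidden states
states[:, sentiment_unit] += 3.0 * (2 * labels - 1)  # inject a sentiment signal

# Linear (logistic-regression) probe trained with plain gradient descent.
w = np.zeros(n_units)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-states @ w))            # predicted P(positive)
    w -= 0.1 * states.T @ (p - labels) / n_reviews

top_unit = int(np.argmax(np.abs(w)))                 # most predictive unit
accuracy = float(np.mean((states @ w > 0) == labels))
print(top_unit, round(accuracy, 2))
```

In this toy setup the probe's largest weight lands on the injected unit, the analogue of the single neuron the researchers found doing most of the classifier's work.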
The sentiment analysis capabilities of this AI surpassed every other approach tested on the Stanford Sentiment Treebank, a small but extensively studied sentiment analysis dataset. The model achieves 91.8 percent accuracy, beating the previous best of 90.2 percent.
Unsupervised learning algorithms are the dream of machine learning researchers: systems that can learn from raw data on their own, eliminating the need to feed them labelled or organized examples. OpenAI’s mLSTM achieved that, but its developers note that the phenomenon is likely not unique to their model:
“We believe the phenomenon is not specific to our model, but is instead a general property of certain large neural networks that are trained to predict the next step or dimension in their inputs.”
Being able to learn unsupervised would give AIs a significant boost, reducing the time required to train them while improving their performance. Such an AI could, for instance, provide skilled virtual assistance by analyzing, or even predicting, a user’s needs. Realizing these applications, and any others imaginable, still requires further study of how this unsupervised behavior developed.
“Our results are a promising step towards general unsupervised representation learning,” OpenAI’s researchers explained. “We found the results by exploring whether we could learn good quality representations as a side effect of language modeling, and scaled up an existing model on a carefully-chosen dataset. Yet the underlying phenomena remain more mysterious than clear.”