MIT scientists claim they can teach a computer a new concept using a single example rather than thousands. If confirmed, this would significantly reduce the amount of data machine learning requires.
They use an algorithm that takes advantage of “Bayesian Program Learning,” or BPL: after being fed data, the computer generates its own additional examples and then determines which ones fit the pattern best.
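The paper’s actual BPL model composes probabilistic programs over pen strokes, which is far richer than anything shown here. Still, the generate-and-score loop described above can be sketched in a few lines of Python. Everything in this snippet — the feature-vector representation of a character, the Gaussian noise model, and the function names — is an illustrative assumption, not the authors’ method:

```python
import random

def generate_candidates(example, n=50, noise=0.2, rng=None):
    # Perturb the single observed example to propose hypothetical new instances,
    # standing in for BPL's generative step.
    rng = rng or random.Random(0)
    return [[v + rng.gauss(0, noise) for v in example] for _ in range(n)]

def score(candidate, example):
    # Higher is better: negative squared distance to the observed example,
    # a crude stand-in for "how well does this candidate fit the pattern?"
    return -sum((a - b) ** 2 for a, b in zip(candidate, example))

example = [1.0, 2.0, 3.0]  # one observed character, as a toy feature vector
candidates = generate_candidates(example)
best = max(candidates, key=lambda c: score(c, example))
```

The point of the sketch is the shape of the computation: one example in, many self-generated candidates, and a scoring function that picks the best fit.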
The researchers behind BPL say they’re attempting to recreate the way humans are able to learn a new task after seeing it done once. “The gap between machine learning and human learning capacities remains vast,” one of the authors of the research paper, which was published last week in the journal Science, told GeekWire. “We want to close that gap, and that’s the long-term goal.”
Joshua Tenenbaum, an MIT professor and one of the paper’s authors, set the algorithm to work on a database of 1,623 handwritten characters drawn from 50 writing systems, including Sanskrit and Tibetan. To check the computer’s performance, the researchers set up “visual Turing tests”: they laid out characters drawn by humans beside an equal number drawn by the computer, and then asked human judges to identify which was which. In each round of testing, only about 25 percent of the judges could reliably tell the human-written characters from the machine-written ones.
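To see why near-chance judging implies the machine’s output is hard to distinguish from a human’s, it helps to simulate the test with a judge who cannot tell the two apart and so guesses at random. The pool sizes and labels below are illustrative assumptions, not the study’s actual setup:

```python
import random

rng = random.Random(42)

# A pool of samples: half drawn by humans, half by the machine, shuffled together.
samples = ["human"] * 20 + ["machine"] * 20
rng.shuffle(samples)

# A judge with no real signal just guesses; expected accuracy sits near
# 50 percent, the chance level for a two-way choice.
guesses = [rng.choice(["human", "machine"]) for _ in samples]
accuracy = sum(g == s for g, s in zip(guesses, samples)) / len(samples)
```

A judge who could actually spot the machine’s characters would score well above 50 percent, so accuracy hovering near chance is the signature of indistinguishable output.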
What This Means
The researchers stated that the BPL approach “can perform one-shot learning in classification tasks at human-level accuracy and fool most judges in visual Turing tests of its more creative abilities.” But there were limitations: the system could classify the characters, but running the algorithm took the computer several minutes.
Once the algorithm is polished, it could be incorporated into next-generation devices, Tenenbaum told GeekWire. “If you want a system that can learn new words for the first time very quickly… we think you will be best off using the kind of approach we have been developing here.”