Advances in machine learning and deep learning systems are bringing us closer to true artificial intelligence (AI) than ever before. One major limitation of these systems, though, is the effort required to teach them: most need thousands, or even hundreds of thousands, of examples before they can “learn” something new.
Self-driving car systems absorb miles of traffic data to learn basic driving lessons, and this scary image generator had to be fed 200,000 images before it could recognize a normal face. A new development from the team at Google DeepMind, however, may be the start of leveling out that steep learning curve for AI systems.
To speed up the learning process, Google DeepMind researcher Oriol Vinyals added a memory component to a deep-learning system. After training on samples from a few hundred image categories, the system can recognize a new object from a single example, a skill referred to as “one-shot learning.” Its accuracy came close to that of a typical system fed far more data.
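The core idea can be illustrated in miniature. DeepMind’s actual system learns its image representations end to end, but the sketch below, with invented labels and hand-picked toy vectors standing in for learned embeddings, shows the one-shot step itself: the system stores a single labeled example per class in memory, then classifies a new input by finding the stored example it most resembles.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two embedding vectors (1.0 = same direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def one_shot_classify(query, support):
    # support: one labeled embedding per class -- the "one shot" memory.
    # The query gets the label of its most similar stored example.
    return max(support, key=lambda label: cosine_similarity(query, support[label]))

# Toy stand-ins for learned embeddings, one example per category.
support = {
    "cat": [1.0, 0.1, 0.0],
    "dog": [0.0, 1.0, 0.1],
    "car": [0.1, 0.0, 1.0],
}
query = [0.9, 0.2, 0.05]  # a new input, closest to the stored "cat" example
print(one_shot_classify(query, support))  # prints "cat"
```

In the real system, the quality of the learned embedding function does the heavy lifting; the comparison step stays this simple.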
Teaching AI to learn faster is essential if we ever hope for it to match the level of human learning. The goal is to make teaching AI like teaching babies — they should ideally be able to learn a lot from a smaller amount of information. If successful, efforts in this field could allow for not just faster AI, but better and more accurate AI.
For Google, a system like the one developed at DeepMind could improve search engines by quickly learning new search terms and then outputting better results. It could be used to analyze and recognize handwriting, improve how autonomous cars navigate the world around them, create better speech-recognition software… any field that uses AI could benefit from systems that learn more accurately, more quickly.