Futurism

Google’s DeepMind Says It Has All the Tech It Needs for General AI

by Dan Robitzski
Jun 10

Reinforcement learning may be able to teach itself how to reach true intelligence.

Solid Foundation

In order to develop artificial general intelligence (AGI), the sort of all-encompassing AI that we see in science fiction, we might merely need to sit back and let a simple algorithm develop on its own.

Reinforcement learning, a gamified AI architecture in which an algorithm “learns” to complete a task by seeking out preprogrammed rewards, could theoretically grow and learn so much that it breaks through the barrier to AGI without any new technological developments, according to research published by the Google-owned DeepMind last month in the journal Artificial Intelligence and spotted by VentureBeat. While reinforcement learning is often overhyped within the AI field, it’s interesting to consider that engineers may have already built all the tech needed for AGI and now simply need to let it loose and watch it grow.
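The reward-seeking loop at the heart of reinforcement learning can be sketched in a few lines. The toy environment below (a five-cell line with a reward at one end), the hyperparameters, and the tabular Q-learning setup are illustrative assumptions for the sake of a minimal example, not DeepMind's actual system:

```python
import random

N_STATES = 5          # cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the agent's running estimate of future reward per (state, action)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply the action; reaching cell 4 pays +1 and ends the episode."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

random.seed(0)
for _ in range(500):                      # training episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best-known action, occasionally explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy moves right toward the reward from every cell
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Nothing in the code tells the agent to "go right"; that behavior emerges purely from maximizing reward, which is the intuition DeepMind's argument scales up.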

AI Primer

The kind of artificial intelligence that we encounter every day of our lives, whether it’s machine learning or reinforcement learning, is narrow AI: an algorithm designed to accomplish a very specific task like predicting your Google search, spotting objects in a video feed, or mastering a video game. By contrast, AGI — sometimes called human-level AI — would be more along the lines of C-3PO from “Star Wars,” in the sense that it could understand context, subtext, and social cues. And, needless to say, it might even outstrip humans entirely.

For years, scientists have disagreed over whether we already have all the core components necessary for AGI or if building it would require some new kind of tech that hasn’t been invented yet. Now, it seems DeepMind has decided to join the former camp.


Tech Imitates Life

DeepMind’s argument essentially boils down to this: reward-seeking behavior was enough to drive the evolution of natural life, so why shouldn’t it do the same for artificial life?

The researchers wrote that “the generic objective of maximizing reward is enough to drive behavior that exhibits most if not all abilities that are studied in natural and artificial intelligence.”

Of course, this argument has several assumptions baked in, like the idea that a reinforcement learning algorithm would be able to develop its way to true intelligence within whatever hardware architecture it was granted. But it’s still a provocative thought experiment that will be interesting to watch play out.

READ MORE: DeepMind says reinforcement learning is ‘enough’ to reach general AI [VentureBeat]


More on AGI: Why Should We Bother Building Human-Level AI? Five Experts Weigh In


