As the rest of the company is mandated to work towards Mark Zuckerberg's metaverse dreams, Facebook's artificial intelligence chief is quietly building a roadmap towards "autonomous" machine intelligence.

Case in point, Meta AI Chief and famed computer scientist Yann LeCun published a paper earlier this summer — and presented it last week at Berkeley — that describes a lack of "common sense" in current AI efforts and lays out a pathway to future iterations that "learn as efficiently as humans and animals" as they become increasingly autonomous.

Common sense, as described by LeCun, is a collection of "models of the world" that allow humans and animals to predict whether events are likely or unlikely, plausible or implausible, and possible or impossible.

"A self-driving system for cars may require thousands of trials of reinforcement learning to learn that driving too fast in a turn will result in a bad outcome, and to learn to slow down to avoid skidding," the pioneering AI researcher wrote. "By contrast, humans can draw on their intimate knowledge of intuitive physics to predict such outcomes, and largely avoid fatal courses of action when learning a new skill."

To bridge the gap between the many iterations of trial and error required to train neural networks and the "intuitive" nature of organic knowledge, LeCun proposes retooling how these algorithms are trained so that they learn more efficiently and thus develop an approximation of the common sense we humans take for granted.
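For the technically curious, here's a toy Python sketch of that gap. It is purely illustrative and not from LeCun's paper: the actions, reward values, and function names are all made up. But it shows why an agent with even a crude predictive "world model" can avoid a bad outcome it has never actually experienced, while a pure trial-and-error learner has to crash its way to the same lesson.

```python
import random

# Toy setup: an agent approaches a turn and must pick a speed.
# Going "fast" into the turn ends badly; "slow" is safe.
ACTIONS = ["slow", "fast"]


def outcome(action: str) -> float:
    """Reward handed back by the environment after the fact."""
    return -10.0 if action == "fast" else 1.0


def trial_and_error_agent(episodes: int = 1000) -> str:
    """Model-free learner: needs many episodes to average out which action is safe."""
    totals = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        action = random.choice(ACTIONS)
        totals[action] += outcome(action)
        counts[action] += 1
    return max(ACTIONS, key=lambda a: totals[a] / max(counts[a], 1))


def world_model_agent() -> str:
    """Agent with a crude 'intuitive physics' model: it predicts each action's
    outcome internally, so it never has to skid off the road to learn the lesson."""
    def predict(action: str) -> float:
        # Hand-written stand-in for a learned predictive model.
        return -10.0 if action == "fast" else 1.0

    return max(ACTIONS, key=predict)


if __name__ == "__main__":
    print("trial-and-error agent picks:", trial_and_error_agent())
    print("world-model agent picks:", world_model_agent())
```

Both agents end up choosing "slow," but the second one gets there without ever experiencing the crash, which is the sample-efficiency point LeCun is making.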

While it doesn't sound all that sexy, something akin to intuition will likely be required to move AI from its current state, which is impressive but confined to narrow domains, to something closer to human intelligence.

"It’s a practical problem because we really want machines with common sense," LeCun said during his late September talk at Berkeley. "We want self-driving cars, we want domestic robots, we want intelligent virtual assistants."

Because he's a scientist, the Meta AI chief's next-generation algorithm-training architecture involves a bunch of moving parts: a system that replicates short-term memory, another that teaches neural networks self-criticism, and a "configurator" module that synthesizes all inputs into useful information, components he lays out in a diagram in the paper.
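For a more concrete picture of how those pieces might slot together, here's a minimal Python sketch. To be clear, the class names and toy logic below are assumptions made for illustration; LeCun's paper describes the architecture conceptually and doesn't prescribe any implementation.

```python
"""Illustrative-only sketch of the modular layout described above: perception,
a predictive world model, a critic (the "self-criticism" piece), short-term
memory, and a configurator tying it together. All names are assumptions."""
from dataclasses import dataclass, field
from typing import List


@dataclass
class Percept:
    """Compact representation of the current state of the world."""
    features: List[float]


class Perception:
    def encode(self, raw_observation: List[float]) -> Percept:
        # Stand-in for a learned encoder (e.g. a neural network).
        return Percept(features=raw_observation)


class WorldModel:
    def predict(self, state: Percept, action: str) -> Percept:
        # Stand-in for a learned predictor of "what happens next".
        # Made-up rule: keeping speed amplifies skid risk, slowing dampens it.
        scale = 1.5 if action == "keep_speed" else 0.5
        return Percept(features=[f * scale for f in state.features])


class Critic:
    def score(self, predicted_state: Percept) -> float:
        # Lower is better: a trainable estimate of how bad an outcome is.
        return sum(abs(f) for f in predicted_state.features)


class ShortTermMemory:
    def __init__(self) -> None:
        self.episodes: List[Percept] = []

    def store(self, state: Percept) -> None:
        self.episodes.append(state)


@dataclass
class Configurator:
    """Configures the other modules for the task at hand."""
    candidate_actions: List[str] = field(
        default_factory=lambda: ["slow_down", "keep_speed"]
    )


class Agent:
    def __init__(self) -> None:
        self.perception = Perception()
        self.world_model = WorldModel()
        self.critic = Critic()
        self.memory = ShortTermMemory()
        self.configurator = Configurator()

    def act(self, raw_observation: List[float]) -> str:
        state = self.perception.encode(raw_observation)
        self.memory.store(state)
        # Imagine each candidate action with the world model, then let the
        # critic pick the one with the least-bad predicted outcome.
        scored = {
            action: self.critic.score(self.world_model.predict(state, action))
            for action in self.configurator.candidate_actions
        }
        return min(scored, key=scored.get)


if __name__ == "__main__":
    agent = Agent()
    print(agent.act([0.5, -0.2, 1.3]))  # prints "slow_down" in this toy setup
```

The point of the modular layout is that the agent evaluates imagined futures before acting, rather than learning only from rewards after the fact.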

Taken together, these components are meant to help machine intelligence replicate the processes of the human mind — a prospect as fascinating as it is terrifying.

That Meta's top AI researcher is quietly circulating a paper on how to make AI into autonomous "thinkers" is itself a hugely intriguing story, and given what he hopes to achieve, it may even be a major boon for the ailing tech giant.
