
The Military Wants to Make AI That Mimics the Human Brain. Experts Know There’s a Better Way.

There’s a good reason the first flying machines weren’t mechanical bats: people tried that, and they were terrible.

Dan Robitzski / April 11th, 2018

No matter how many times you hear that AI is going to enslave humanity and take over the world, it’s hard to believe when we’re constantly confronted with AI that’s consistently stupid. A few reminders: Alexa once played porn when someone requested a children’s song; an AI playing one of those old text-based computer games got stuck because it kept issuing nonsense commands.

While that might save us from a Skynet-type situation, it’s problematic as we use AI for increasingly sophisticated applications, such as robotic prosthetics, writes DARPA’s Justin Sanchez in the Wall Street Journal. Brains and computers process information very differently, and the software for a prosthetic arm can’t keep up with all the different ways a person’s brain might attempt to control it. The result is that prosthetics spend an awful lot of time sitting still.

What if the software was better adapted to how brains actually work?

DARPA thinks it found the answer: train AI to read and adapt along with the brain’s signals, learning what we are thinking and why as we do it. In short: teach AI to function more like the human brain.

Sounds good, right? In practice, however, it would mean clearing a much bigger hurdle, one that has tripped up a great many researchers in the race to create a truly intelligent machine: figuring out how in the hell our brains work. Doing that would allow for a seamless interface between brain and machine that could, to continue DARPA’s example, give an amputee perfect control over their artificial limbs. And if the plan wasn’t batshit enough already, it even has some scientists speculating on whether AI might be able to hallucinate or develop depression.

But here’s a handy thing to know about the human brain: we really have no idea what’s going on in there.

And it just so happens that a number of leading AI researchers think that trying to decode and mimic the human brain is a waste of time.

Max Tegmark, an MIT physicist and director of the Future of Life Institute, has a few choice words for those attempting to digitally recreate the human brain. Namely, he calls it “carbon chauvinism.”

“We’re too obsessed with how our brain works, and I think that shows a lack of imagination,” he said during a panel on AI last September.

“The main progress right now and in the near future will be getting to a performance at a human-level without getting the details of the human brain all figured out,” Bart Selman, an AI researcher at Cornell University, told Business Insider.

There’s nothing wrong with mimicking the natural world in technology. For an amputee controlling a prosthetic, software based on how the brain processes movement could be invaluable.

But the key there is to be inspired by existing biology while creating something new based on the framework of the technology itself — in this case, intelligence and information processing.

There’s a very good reason the first flying machines didn’t imitate the way bats fly, and the first cars weren’t based on horses and buggies: people tried that, and they were terrible. AI is no different. And the sooner we can move away from the idea that we should try to copy an incredible computer that we don’t understand (our brains), the more AI can advance.
