Biological and artificial neurons recognize 3D objects in similar ways.
Spitting Image
A team of scientists found a surprising similarity between how human brains and artificial neural networks perceive the world.
In the human brain, visual information passes through a series of cortical areas that each interpret different aspects of an image, ultimately piecing together our perception of the world around us. A new study published Thursday in the journal Current Biology found that aspects of 3D shape, like bumps and spheres, are interpreted early in that process. And, it turns out, the same thing happens in artificial neural networks as well.
Convergent Evolution
It may not seem too shocking that neural networks, a kind of artificial intelligence architecture explicitly modeled after the brain, interpret information similarly. But scientists didn't know about this particular aspect of how biological brains work before they saw it in the algorithm AlexNet.
"I was surprised to see strong, clear signals for 3D shape as early as V4," Johns Hopkins University neuroscientist and study author Ed Connor said in a press release, referring to a specific visual cortex. "But I never would have guessed in a million years that you would see the same thing happening in AlexNet, which is only trained to translate 2D photographs into object labels."
Circular Learning
The unexpected parallel hints that neural networks can teach us about our brains, just as we use what we know about the brain to develop new neural networks.
"Artificial networks are the most promising current models for understanding the brain," Connor said. "Conversely, the brain is the best source of strategies for bringing artificial intelligence closer to natural intelligence."
READ MORE: Researchers discover 'spooky' similarity in how brains and computers see [Johns Hopkins University]
More on neural networks: Physicist: The Entire Universe Might Be a Neural Network