Researchers at Carnegie Mellon University have taken on an ambitious new project: reverse-engineering the brain. Ultimately, their goal is to “make computers think more like humans.” Their five-year research effort has now been funded for $12 million by the U.S. Intelligence Advanced Research Projects Activity (IARPA).
The research effort, through IARPA’s Machine Intelligence from Cortical Networks (MICrONS) research program, is part of the U.S. BRAIN Initiative to revolutionize the understanding of the human brain. It’s being led by Tai Sing Lee, a professor in the Computer Science Department and the Center for the Neural Basis of Cognition (CNBC).
“MICrONS is similar in design and scope to the Human Genome Project, which first sequenced and mapped all human genes,” Lee said. “Its impact will likely be long-lasting and promises to be a game changer in neuroscience and artificial intelligence.”
On a broad scale, the researchers hope to discover the rules that the brain’s visual system uses to process information. They believe that, with this understanding, they can revolutionize machine learning algorithms and computer vision.
Specifically, the researchers want to improve the performance of artificial neural networks — computational models for artificial intelligence inspired by the central nervous systems of animals. This type of technology is more common than you might think. It’s used in self-driving cars and facial recognition systems, and it can also be used to understand speech and handwriting.
However, the technology is a bit outdated.
“Today’s neural nets use algorithms that were essentially developed in the early 1980s,” Lee said. “Powerful as they are, they still aren’t nearly as efficient or powerful as those used by the human brain. For instance, to learn to recognize an object, a computer might need to be shown thousands of labeled examples and taught in a supervised manner, while a person would require only a handful and might not need supervision.”
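Lee’s point about supervised learning can be made concrete with a toy example. The sketch below (illustrative only, not the project’s code) trains a single perceptron — one of the simplest neural-network units — to recognize a pattern. Note that every training case must carry an explicit, human-provided label, and the model needs repeated passes over the labeled data before it gets the answer right; the function and parameter names are hypothetical.

```python
# Illustrative sketch of supervised learning: a single perceptron
# learns the logical AND pattern from explicitly labeled examples.
# Every example carries a human-provided label, and the model needs
# many repeated passes (epochs) over the data to converge.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights [w1, w2, bias] from (inputs, label) pairs."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = label - pred          # supervision signal: compare to label
            w[0] += lr * err * x1       # nudge each weight toward the label
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

def predict(w, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0

# Each training case is (inputs, label) — the label is the supervision.
labeled = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(labeled)
```

A human sees the pattern from a glance at those four cases; the perceptron needs the full labeled set and dozens of weight updates. Scaled up to object recognition, that gap becomes thousands of labeled images — the inefficiency Lee describes.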
To make those updates and improvements, the team set out to better understand the brain’s connections. Sandra Kuhlman, assistant professor of biological sciences at Carnegie Mellon and the CNBC, plans to use a technique called “two-photon calcium imaging microscopy” to record the signaling of tens of thousands of individual neurons in mice as they process visual information.
That scale is unprecedented: in past experiments, researchers have typically sampled only a single neuron, or at most tens of neurons, she noted.
“By incorporating molecular sensors to monitor neural activity in combination with sophisticated optical methods, it is now possible to simultaneously track the neural dynamics of most, if not all, of the neurons within a brain region,” Kuhlman said. “As a result we will produce a massive dataset that will give us a detailed picture of how neurons in one region of the visual cortex behave.”
All of the information discovered by the team will be compiled into databases that will be made publicly available for research groups all over the world.
CMU researchers and collaborators hope to use these massive databases to evaluate learning models and thus improve their understanding of the brain’s computational principles. Lee believes the project will result in machines with more human-like qualities, as well as better computer algorithms for learning and pattern recognition.
“The hope is that this knowledge will lead to the development of a new generation of machine learning algorithms that will allow AI machines to learn without supervision and from a few examples, which are hallmarks of human intelligence,” Lee said.
However, not everyone is on board with the project. Yann LeCun, Director of AI Research at Facebook and a professor at New York University, does not believe intelligent machines should be built by copying the brain directly. “We need to understand the underlying principles of intelligence to know what to copy. But we should draw inspiration from biology,” he said.