Rethinking How the Human Brain Remembers

New research from scientists at Columbia University's Zuckerman Institute has turned our classical understanding of how the human brain perceives and remembers on its head. The work shows that when the brain observes, it first processes details and builds them into internal models of more complex objects. When it recalls, however, it reverses that order: it retrieves those complex representations first and only then works back to the details it originally used to construct them. The research relied on Bayes' theorem and other mathematical modeling, and it may have practical applications in many areas, from evaluating testimony in the courtroom to treating people with sensory processing differences, such as those associated with autism.

Lacking direct evidence, scientists had long assumed that decoding followed the same hierarchy as perception: from details up to complex objects. This research shows that the assumption does not hold for the decoding that takes place during recall.

The team unraveled the decoding hierarchy of the human brain by focusing on simple recall tasks that could be clearly interpreted. In the first task, subjects had half a second to view a line angled at 50 degrees on a computer screen. After it disappeared, they moved two dots on the screen to approximate the angle they remembered, and repeated this task 50 times. The second task was identical, except the angle of the line was 53 degrees. In the third task, the subjects saw both lines at the same time, and then tried to match pairs of dots to each angle.

Image Credit: Ning Qian/Columbia University, Zuckerman Institute

“Memories of exact angles are usually imprecise, which we confirmed during the first set of one-line tasks. So, in the two-line task, traditional models predicted that the angle of the 50-degree line would frequently be reported as greater than the angle of the 53-degree line,” said Ning Qian, a neuroscientist at the Mortimer B. Zuckerman Mind Brain Behavior Institute and the study's principal investigator, in a press release. However, that wasn't what happened, and several other findings also contradicted traditional models.
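To see why traditional models make that prediction, consider a minimal simulation (a hypothetical Python sketch; the 5-degree noise level is invented purely for illustration) in which each line's angle is recalled independently with some imprecision:

```python
import random

def recall(true_angle_deg, noise_sd=5.0):
    """Traditional view: each angle is decoded independently,
    corrupted by Gaussian memory noise (noise_sd is illustrative)."""
    return random.gauss(true_angle_deg, noise_sd)

trials = 10_000
# Count trials where the 50-degree line is reported as steeper
# than the 53-degree line.
reversals = sum(recall(50.0) > recall(53.0) for _ in range(trials))
print(f"Order reversals: {reversals / trials:.1%} of trials")
```

With independent noise of a few degrees, roughly a third of simulated trials misorder the two lines, the frequent reversals that traditional models predicted but the subjects did not produce.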

Explaining the Process

The authors proposed that reverse decoding makes sense because, in everyday life, context matters more than details. When we see a new face, for example, its expression, whether angry or friendly, is what really matters; we attend to details such as the shape of its features only later, if need be, and we do so by estimating them. “Even your daily experience shows that perception seems to go from high to low levels,” Dr. Qian said in the press release.


The team then built a mathematical model of what they believe happens in the brain using Bayesian inference: the higher-level complex features serve as the prior information for decoding the lower-level features, rather than the details being used to decode, or recall, the bigger picture. The model's predictions were a good fit for the behavioral data.
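As a rough illustration of that reverse-decoding idea (this is not the authors' actual model; the ordering prior and the noise level are assumptions made for the sketch), the remembered high-level relation between the two lines can act as a Bayesian prior that constrains the noisy low-level angle estimates:

```python
import random

def reverse_decode(true_a=50.0, true_b=53.0, noise_sd=5.0):
    """Hypothetical reverse decoding: the high-level relation
    (the first line is shallower than the second) is recalled
    first and serves as a prior, so only low-level angle
    estimates consistent with it are accepted."""
    while True:
        a = random.gauss(true_a, noise_sd)  # noisy detail estimate
        b = random.gauss(true_b, noise_sd)
        if a < b:  # condition on the high-level prior
            return a, b

pairs = [reverse_decode() for _ in range(10_000)]
print("Order reversals:", sum(a > b for a, b in pairs))  # always 0 here
```

Conditioning on the high-level feature first rules out misordered reports, in line with the behavior that traditional detail-first models failed to predict.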

Moving forward, the researchers plan to apply their work to studies of long-term memory, not just simple perception. This could have major implications in many areas, from assessing the credibility of witnesses in court to treating people with sensory processing issues. It could even help computer scientists develop microchips that rival the power of the human brain by giving them similar perceptual acuity.

