Reading the mind to reconstruct a face: Is dream reconstruction next?
When one of us takes in another’s face, it’s like a party in the brain. Signals dart from region to region as we piece together the eyes, the mouth, the emotional expression, the degree of attraction or fear we may feel, the memory of a familiar feature or mannerism. New research has found that, by listening in long enough to an individual’s brain as he or she gazes at many faces, one can sketch a pretty good facsimile of an unfamiliar new face that person is seeing.
Using the same technique, one might one day be able to reconstruct a facial image called to someone’s mind by memory, or even seen in a dream. Someday, a parent or therapist working with a child with autism or social anxiety might use it to teach, or to gain a glimpse of what a human face looks like to someone whose facial-processing skills are affected by a neuropsychiatric disorder.
The occipital cortex, as always, gets first crack at processing whatever the eyes behold. The processing of simple colors or shapes, or even of buildings or natural objects, activates relatively few regions beyond that. But when it comes to gazing upon another human being's face, the picture is incomplete without input from some of the brain's most highly evolved regions, such as the prefrontal cortex--and some of its most primitive, such as the seat of emotions, the amygdala.
At Yale University, an undergraduate student asked Marvin Chun, a professor of psychology, cognitive science and neurobiology, whether the combination of computation and functional magnetic resonance imaging (fMRI) might be used to recreate what the mind sees when it looks at a face. That question led Chun, postdoctoral researcher Brice Kuhl and Alan S. Cowen, who is now pursuing an advanced psychology degree at UC Berkeley, to gather six research participants--two women and four men between 18 and 35 years old--for a study. Using fMRI, the team exhaustively recorded their subjects’ brain-activation patterns in response to some 300 faces.
The resulting study was accepted for publication in the journal NeuroImage and appears in the issue dated May 15, currently in press.
The 300 “training faces” varied in such features as ethnicity, skin color and facial expression, but all were pictured head-on and at the same scale. After each participant viewed the sequence of faces 60 times in an fMRI scanner, the researchers compiled a library of brain-activation patterns for that participant. They then used a machine-learning algorithm to sift through that library and discern regular patterns of brain activation that corresponded to the faces' distinguishing features.
Then the researchers presented the participants with 30 new faces, and watched as their brains whirred into action on each one. Checking against each participant's library of responses to the training faces, the researchers found that their computer program could reconstruct the face the participant was viewing with reasonable fidelity: gender, age and emotional expression appeared to be reliable, as were features such as skin tone and facial contours. (You can see one such reconstruction here.)
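The article doesn't spell out the study's algorithm, but the train-then-decode procedure it describes can be sketched in miniature. The sketch below is an assumption-laden toy, not the authors' method: it uses synthetic random data in place of face photos and fMRI scans, describes each face by a handful of "eigenface" principal components, and fits a simple ridge regression from voxel patterns to those components so that a held-out face can be reconstructed from its brain response alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic stand-ins for the study's data (all sizes are assumptions,
# --- except 300 training faces and 30 new faces, which match the article) ---
n_train, n_test = 300, 30
n_pixels, n_voxels, n_comp = 64, 200, 20

faces = rng.normal(size=(n_train + n_test, n_pixels))   # flattened "face images"
W_true = rng.normal(size=(n_pixels, n_voxels))          # unknown brain encoding
voxels = faces @ W_true + 0.5 * rng.normal(size=(n_train + n_test, n_voxels))

train_faces, test_faces = faces[:n_train], faces[n_train:]
train_vox, test_vox = voxels[:n_train], voxels[n_train:]

# --- Step 1: summarize each training face with a few "eigenface"
# --- components, computed by PCA via the SVD ---
mean_face = train_faces.mean(axis=0)
U, S, Vt = np.linalg.svd(train_faces - mean_face, full_matrices=False)
components = Vt[:n_comp]                                # eigenfaces
train_scores = (train_faces - mean_face) @ components.T

# --- Step 2: learn a ridge-regression map from brain-activation patterns
# --- to face components, using only the training library ---
lam = 10.0
B = np.linalg.solve(train_vox.T @ train_vox + lam * np.eye(n_voxels),
                    train_vox.T @ train_scores)

# --- Step 3: reconstruct each new face from its voxel pattern alone ---
pred_scores = test_vox @ B
reconstructed = mean_face + pred_scores @ components

# Sanity check: reconstructions should correlate with the true held-out faces
r = np.mean([np.corrcoef(reconstructed[i], test_faces[i])[0, 1]
             for i in range(n_test)])
print(f"mean reconstruction correlation: {r:.2f}")
```

Reconstruction quality here is limited by how much of each face the twenty components capture, which loosely mirrors the article's observation that broad attributes (gender, age, expression) come through more reliably than fine detail.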
Chun called the result “a form of mind-reading” that captures and reflects the sophistication with which we visually process the human face. And because the program uses patterns of brain signals from regions quite separate from the visual cortex, it could probably reconstruct a face that is not actually seen but is instead remembered or dreamed, Chun said.
“Extending the present methods to reconstruction of offline visual information represents a truly exciting--yet theoretically feasible--avenue for future research,” the authors wrote.
[For the record, March 27, 2014: This post has been altered to correct the spelling of coauthor Alan S. Cowen’s name.]