How does the brain turn millions of electrical impulses into objects we recognize?
That question is at the heart of Doris Tsao’s research.
The Caltech visual neuroscientist uses brain imaging technology, electrical recording techniques and mathematical modeling in her search for answers. That quest got a boost Thursday as Tsao was named to the 2018 class of MacArthur fellows.
“I study how we see — how the brain sees — and this is important because it’s a way of understanding how the brain works,” said Tsao, who was born in China and raised in College Park, Md. “And the brain, of course, is who we are.”
Tsao, 42, has explored several aspects of visual processing, including the perception of depth and color. But her most notable line of research seeks to uncover the fundamental neural principles that underlie one of the brain’s most highly specialized and socially important tasks: recognizing a face.
The so-called genius award provides fellows with $625,000 to spend over five years, with no strings attached. Tsao spoke with the Los Angeles Times about the work that caught the MacArthur Foundation’s attention.
What inspired you to study the way we recognize faces?
I first became interested in vision in sixth grade, when I woke up one morning and wondered whether space is infinite or not. This is a problem that’s been tackled by philosophers, physicists and vision scientists.
The way vision scientists think about this problem is: “How does the electrical activity of neurons in the brain construct our subjective perception of space?”
From space, I became interested in how 3-D objects are perceived, and this led to my current interest in object recognition.
And faces in particular?
Yes, indeed. This ability to recognize faces is remarkable.
Is this something everyone is good at?
There are certain people called prosopagnosics who are actually extremely bad at recognizing faces. One guy with this condition commented that he didn’t understand why, in the movies, the robbers always cover their faces. To him, it was as incomprehensible as covering the arm.
What’s it like to be prosopagnosic?
There’s a wonderful illusion that you can try on yourself.
Look at these faces. They look pretty similar, right?
Now flip the images upside down and look at them again. They look extremely different, even though the pixels are exactly the same.
What’s going on?
The part of your brain that codes facial identity is specialized for upright faces. So looking at faces upside down mimics what happens when you have a lesion to the “face area” in your brain. And this area is what makes us so good at recognizing faces.
When it comes to recognizing faces, how do humans stack up against other primates?
The jury is still out. Even among humans, there’s a lot of variation due to experience.
How can you tell what’s going on inside the brain?
There was evidence from humans for the existence of an area dedicated to processing faces. This area was discovered using a technique called fMRI [or functional magnetic resonance imaging] that measures blood flow in the brain.
I decided to try to find a similar area in monkeys so that I could then study it with electrophysiology, a much more sensitive technique that lets us pick up electrical signals from single neurons.
If you figure out how the brain recognizes faces, what else will it tell you?
We can learn a lot about how any object is represented from understanding how the brain represents faces.
Faces are like a Rosetta stone. The brain uses a certain set of rules to convert different face images into patterns of electrical activity. It turns out that these rules look very similar to those that govern how we recognize other objects.
How do you know?
We’ve found that, in macaque monkeys, a region called the inferior temporal cortex — part of the visual cortex — has six distinct but densely interconnected “patches” of cells that become highly active when those monkeys are assessing the faces of other macaques. We call those “face patches.”
Individual cells in these regions act like rulers, measuring faces along different axes of a large “face space.”
Face space is similar to our familiar 3-D space, but each point represents a face rather than a spatial location, and it has much more than just three dimensions — probably at least 100.
For our familiar 3-D space, any point can be described by three coordinates (x, y, z). For a 100-dimensional face space, any point can be described by 100 coordinates. Each cell in a face patch is essentially measuring one of these 100 coordinates. Together, they can determine the exact face being seen.
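The “rulers measuring coordinates” idea can be sketched in a toy simulation. This is a hypothetical illustration with made-up numbers, not Tsao’s actual analysis: it simply assumes each cell’s response is a noisy linear projection of a face’s coordinates onto that cell’s preferred axis, so the coordinates can be recovered by inverting the linear code.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 100   # dimensions of "face space" (per the interview)
N = 200   # number of recorded face cells (hypothetical)

# Each cell acts like a ruler: its response is the projection of the
# face's coordinates onto the cell's preferred axis, plus a little noise.
axes = rng.standard_normal((N, D))     # row i: cell i's preferred axis
face = rng.standard_normal(D)          # the "mystery" face's coordinates
responses = axes @ face + 0.01 * rng.standard_normal(N)

# Decoding: knowing each cell's axis, recover the face's coordinates
# from the population of responses by least squares.
decoded, *_ = np.linalg.lstsq(axes, responses, rcond=None)

print(np.allclose(decoded, face, atol=0.05))  # the code is invertible
```

In this sketch the decoded coordinates match the true face almost exactly, echoing the reconstruction result described below; the real experiment, of course, fit the axes from recorded neural data rather than assuming them.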
How do you test something like this?
We learned, for each cell, what coordinate of face space it was in charge of measuring. Then we showed an animal a mystery face (that we weren’t allowed to peek at).
Just using the electrical responses of the face cells, we could reconstruct the face the animal was seeing. It turned out to match exactly the face it was actually seeing.
Do we store memories of faces using the same kinds of templates that we use to recognize them?
We are studying this process. What’s very cool is that the brain does face recognition in two separate steps. First it computes a code for every face, including unfamiliar ones. Then it compares these codes to memories.
One of the joys of studying the brain is that I get to actually peek inside the carpenter shop and see each step that the brain is using to construct our perception of reality.
There’s a difference between recognizing a face and being able to conjure a face from the past. What’s that about?
I wish I knew.
This interview has been edited for length and clarity.