This is your brain on stories. By tracking the blood flow in people’s brains as they listened to a storytelling radio show, scientists at UC Berkeley have mapped out where the meanings associated with basic words are encoded in the cortex, creating the first semantic atlas of the brain.
The findings, described in the journal Nature, provide an unprecedented view of language and meaning as it plays out on our neural terrain, and could potentially offer a road map for those looking to help patients with certain types of aphasia or other neurological disorders.
For a long time, researchers thought of language as a primarily left-hemisphere function that took place in specific spots of the brain, such as Broca's area and Wernicke's area. But those areas are associated less with understanding language than with producing it -- speech, in short.
The researchers had seven test subjects (including the lead author, Alexander Huth) lie in a functional MRI machine while listening to more than two hours’ worth of stories from the “Moth Radio Hour,” a public radio show in which people tell funny, sad or otherwise poignant autobiographical stories.
FOR THE RECORD
April 28, 8:14 a.m.: A previous version of this article stated that the “Moth Radio Hour” was a production of Public Radio International. It is distributed by the Public Radio Exchange.
“Our subjects love to be in this experiment because they can just lie there and listen to these really interesting stories,” said senior author Jack Gallant, a UC Berkeley neuroscientist. “It’s a million times better than any other experiment we’ve ever done.”
Of course, it’s hard not to laugh at a funny story, and movement can ruin fMRI data. So the researchers 3-D-printed personalized “head cases” for each subject to keep their heads very stable.
The researchers then used natural language processing programs to extract the meanings of common words in the stories. They compared the time-coded transcripts to the fMRI data, which had painstakingly tracked the blood flow in about 50,000 different locations in the brain.
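Conceptually, the comparison amounts to asking, for each brain location, which family of concepts its blood-flow signal tracks over the course of the stories. Here is a minimal toy sketch of that idea in Python; the feature and voxel names and numbers are made up for illustration, and the actual study used far richer word features and regularized regression rather than this simple correlation:

```python
# Toy version of the semantic-mapping idea: each voxel's response over
# time is compared against "semantic feature" time courses (how strongly
# each family of concepts is present in the story at each moment), and
# the voxel is labeled with the family it tracks best. All data are
# hypothetical.

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical concept-family time courses derived from a transcript.
features = {
    "social": [0.9, 0.1, 0.8, 0.2, 0.7],
    "visual": [0.1, 0.9, 0.2, 0.8, 0.1],
}

# Hypothetical blood-flow time courses for two voxels.
voxels = {
    "voxel_A": [0.85, 0.15, 0.75, 0.25, 0.65],  # rises with "social"
    "voxel_B": [0.20, 0.80, 0.30, 0.90, 0.20],  # rises with "visual"
}

# Label each voxel with its best-matching concept family.
atlas = {
    name: max(features, key=lambda fam: correlation(series, features[fam]))
    for name, series in voxels.items()
}
print(atlas)  # {'voxel_A': 'social', 'voxel_B': 'visual'}
```

Scaled up from two voxels to tens of thousands, and from two concept families to a full semantic feature space, the same kind of matching yields a cortex-wide map.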
As it turns out, different regions of the brain responded to different families of related concepts. The word “dog,” hypothetically speaking, might be encoded in areas associated with other animals like “wolf,” in areas related to how dogs look or smell, or (if you had a dog as a kid) perhaps in areas related to words like “home.”
“Each semantic concept is represented in multiple locations in the brain,” Gallant said. “And each brain location represents sort of a family of selected concepts. So ... there’s a bunch of regions in the brain that respond to dogs.”
The map sheds light on the ways we process meaning through language, coloring parts of the brain in different shades depending on what kind of information they encode. Red, for example, has to do with certain social concepts, while green spots pertain to visual and tactile concepts. The model is available online, and Gallant says he hopes the work will become a handy resource for other researchers.
“We’re trying to build an atlas just like a world atlas,” he explained. “If I give you a globe, you can do anything with it – you could look at how big the ocean is or what the highest mountain is or what the distance from New York to California is.”
But is this shared semantic structure innate, or the result of environmental influences? After all, these test subjects shared both language and culture, which could potentially account for some of these similarities.
Gallant isn’t sure whether that pattern will hold across people who speak a language very different from English, such as Mandarin Chinese or Japanese, or across bilingual people responding to their second language. But these are questions that the scientist said were well worth exploring.
“The first law of neuroscience is ‘the brain is complicated,’” he said.