Science moves one step closer to reading minds

Chicago Tribune

A research team has managed to crack the mind’s internal code and deduce what a person is looking at based solely on brain activity, a feat that could pave the way for what the scientists described as “a brain-reading device.” The ability to read minds reliably is still beyond the grasp of science, but the study published last month by neuroscientists at UC Berkeley builds on a growing body of work on how to hack into the brain’s inner language.

The Berkeley team, which published its study online in the journal Nature, used a brain scan to find patterns of activity when people looked at black-and-white images of ordinary items such as bales of hay, a starfish or a sports car. When the people then looked at different photos, a software program drew on activity in the brain’s vision center to guess which images they saw, with up to 92% accuracy.

Other researchers have stolen glances at people’s secret intentions and memories, and the new findings suggest that brain scanners could even reveal the elusive content of dreams.

Such abilities could have positive uses, such as aiding communication for people who are paralyzed or disabled. But some applications could be questionable, such as extracting information from unwilling subjects. Experts said the work’s ethical implications should be examined now, while the field is young.

The deepest problem facing scientists working to understand the brain is how its billions of neurons work together to make our inner life of sensations, ideas and recollections.

The Berkeley group, led by professor Jack Gallant and graduate student Kendrick Kay, did not solve that enduring puzzle. But by using brute computing force, they showed how the raw noise of neurons firing could be linked with specific visual images.

“The finding is very important,” said John-Dylan Haynes, a professor at the Bernstein Center for Computational Neuroscience in Berlin. “This is a very sophisticated way of getting around a problem that seems almost impossible to solve.”

The researchers fine-tuned their computer model by showing 1,750 images to each of the subjects, who were actually Kay and study co-author Thomas Naselaris.

Kay said the researchers scanned their own brains because they knew they would be patient subjects, and he said there was no way they could have manipulated the outcome.

When asked via e-mail how it felt to be in a brain scan machine that might read his thoughts, Kay replied: “To me it’s just data -- I often forget that it is actually my brain activity!”

In the study’s second stage, the subjects looked at 120 new photos while in the brain scan machine. The computer program also analyzed the new photos and predicted how a human brain would respond to them. The program then used its predictions to match actual brain scans with the photos the people were looking at. The technique picked the correct photo 92% of the time for one subject and 72% of the time for the other.
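In rough terms, that identification step amounts to comparing a measured scan with the model’s predicted brain response for every candidate photo and choosing the closest match. The sketch below is only a simplified illustration of that idea, not the Berkeley group’s actual code; the array names, the toy numbers and the use of correlation as the similarity measure are assumptions made for the example.

```python
import numpy as np

def identify_image(measured_response, predicted_responses):
    """Guess which candidate photo a measured brain response came from.

    measured_response: 1-D array of voxel activity from one scan.
    predicted_responses: 2-D array, one row of predicted voxel activity per photo.
    Returns the index of the best-matching photo.
    """
    # Correlate the measured scan with the model's prediction for each photo.
    scores = [np.corrcoef(measured_response, pred)[0, 1]
              for pred in predicted_responses]
    # The photo whose predicted response matches best is the program's guess.
    return int(np.argmax(scores))

# Toy example with made-up numbers: 120 candidate photos, 500 voxels.
rng = np.random.default_rng(0)
predicted = rng.normal(size=(120, 500))   # hypothetical predicted responses
true_photo = 37
measured = predicted[true_photo] + rng.normal(scale=0.5, size=500)  # noisy scan

print(identify_image(measured, predicted))  # ideally prints 37
```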

The scientists’ model still cannot reconstruct from scratch what a person is seeing or imagining. For now, the computer can only work from a well-defined set of images to identify which one a person is looking at. It’s an impressive advance, but experts said the technique’s uses would be limited in the short term.

In addition, the type of brain scan the group used is far too sluggish to capture a person’s response to fast-moving images, Kay said. But in theory, advances in computer programs and brain scans could allow scientists to record the narrative of dreams or let people communicate through pure imagery.

One shortcoming of the Berkeley study is that the accuracy plummeted when the computer program tried to guess in real time which photo a subject was looking at, said Julius Dewald, a neurophysiology expert at Northwestern University.

The program made its best guesses when it could analyze everything after the fact, using an average of how the subject’s brain looked during multiple views of the same photo.
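The averaging Dewald refers to can be pictured as follows; again, this is an illustrative sketch with assumed array sizes, not the study’s code.

```python
import numpy as np

# Suppose one test photo was scanned several times (13 repeats is assumed here
# purely for illustration). Averaging the repeats reduces scanner noise before
# the matching step is run, which is why after-the-fact analysis fares better.
repeated_scans = np.random.default_rng(1).normal(size=(13, 500))  # 13 scans x 500 voxels
averaged_scan = repeated_scans.mean(axis=0)  # one cleaner response vector for that photo
```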

“That would never be much help in the real world, but it’s a nice proof of concept,” Dewald said.

The study’s authors estimate it could be 30 to 50 years before techniques for decoding brain activity are advanced enough to raise urgent ethical issues. But Haynes of the Bernstein Center said the technology had the potential to be so intrusive that scientists should set ethical ground rules now.

Haynes’ group published a study last year in which brain activity revealed whether a person had chosen to add or subtract two numbers. He said it suggests that future methods could delve further into a person’s private plans.

“People should know that there is an ethical dilemma here, and scientists are already thinking about it,” Haynes said.