
Study: Music, language’s common evolutionary roots lie in emotion


Some folks who are musically “tone-deaf” might be a little hard of hearing emotionally too. A new study of people with amusia – the inability to properly process music – revives an old Darwinian hypothesis linking music and language to a common evolutionary origin.

According to an idea put forth by the naturalist Charles Darwin, before humans had either language or music, they had a “musical protolanguage” useful in courtship, in disputes over territory and in expressing emotion (a function later researchers considered particularly crucial for parent-infant bonding).

Previous work has shown that people with amusia don’t have much trouble decoding the meaning carried by elements beyond the words themselves, such as rhythm, stress and intonation. But an international trio of scientists decided to focus on the subtler changes in pitch that reveal the emotions behind a person’s words.


The evidence appears to back their theory up. People with amusia are less likely to say they like or love music, and are less likely to report emotional responses to listening to music. And neuroscience research has indicated that, in both music and speech, acoustic cues for emotion are picked up by the same areas of the brain.

For the paper, published in this week’s Proceedings of the National Academy of Sciences, the researchers tested two dozen volunteers – half with amusia and half without – on how well they could identify which of six emotional tones was conveyed in each of 96 spoken phrases: happy, tender, afraid, irritated, sad and no emotion.

Depending on the emotion, the participants with amusia fared up to 20% worse than their peers. The gap was wider for some emotions (happy, tender, sad) and negligible for others (afraid, no emotion).

The dozen listeners with amusia were also more likely to have trouble telling apart pairs of emotions that feel very different but sound acoustically alike – happy versus irritated, for example, or sad versus tender. These pairs turn out to be similar in intensity and duration: happy and irritated phrases were spoken more quickly and at higher intensity, while sad and tender phrases were spoken more slowly and at lower intensity.

What’s more, “amusic” participants were more likely to report trouble gauging people’s emotional states over the phone, and more likely to say they relied on facial cues and gestures to figure out what a person was feeling.

The findings, the authors write, “lend support to early speculations by Darwin, elaborated upon by several contemporary theorists, that emotional communication is a fundamental link between these domains and reflects their common evolutionary origin.”


Follow me on Twitter @aminawrite.
