Artificial intelligence helps shed light on how people’s brains, bodies and emotions react to listening to music


Your heart beats faster, palms sweat and part of your brain called the Heschl’s gyrus lights up like a Christmas tree.

Chances are, you’ve never thought about what happens to your brain and body when you listen to music in such a detailed way.

But it’s a question that has puzzled scientists for decades: Why does something as abstract as music provoke such a consistent response?

In a new study, a team of USC researchers, with the help of artificial intelligence, investigated how music affects listeners’ brains, bodies and emotions.

The research team looked at heart rate, galvanic skin response (or sweat gland activity), brain activity and subjective feelings of happiness and sadness in a group of volunteers as they listened to three pieces of unfamiliar music.

Of the 74 musical features examined, the researchers found dynamics, register, rhythm and harmony were particularly helpful in predicting listeners’ response.
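To make one of those features concrete: dynamics (loudness over time) is commonly approximated by short-time RMS energy. The sketch below is a minimal pure-Python illustration of that idea, not the study’s actual feature pipeline; the synthetic signal and window size are invented for the example.

```python
import math

def windowed_rms(samples, window=1024):
    """Short-time RMS energy: a simple proxy for musical dynamics (loudness)."""
    frames = []
    for start in range(0, len(samples) - window + 1, window):
        frame = samples[start:start + window]
        frames.append(math.sqrt(sum(x * x for x in frame) / window))
    return frames

# Synthetic example: a quiet passage followed by a loud one (440 Hz tone, 8 kHz rate).
quiet = [0.1 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
loud = [0.8 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
rms = windowed_rms(quiet + loud, window=1024)
print(rms[0] < rms[-1])  # True: the loud section carries higher RMS energy
```

Real feature extractors compute many such frame-level descriptors (register, rhythm, harmony) and track how they evolve across a piece.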

“Taking a holistic view of music perception, using all different kinds of musical predictors, gives us an unprecedented insight into how our bodies and brains respond to music,” said the study’s lead author Tim Greer, a computer science PhD student and a member of the USC Signal Analysis and Interpretation Laboratory (SAIL).

Contrast is crucial

Among their findings, the researchers noted that music powerfully influenced parts of the brain in the auditory cortex called Heschl’s gyrus and the superior temporal gyrus.

Specifically, the brain responded to pulse clarity, or the strength of the beat (put simply: your gyri will be looking lively when listening to Lady Gaga’s Bad Romance).

They also found that changing dynamics, rhythm and timbre, or the introduction of new instruments, brings about an uptick in response.

In other words, contrast is crucial. For instance, the gyri activate when there is a change in dynamics, or “loudness.”

“If a song is loud throughout, there’s not a lot of dynamic variability, and the experience will not be as powerful as if the composer uses a change in loudness,” said Greer, himself a composer who plays sax and keyboard.

“It’s the songwriter’s job to take you on a rollercoaster of emotions in under three minutes, and dynamic variability is one of the ways this is achieved.”

So, if you’re listening to a whole album of black metal, which is consistently loud, you’re probably not going to see a response. But if you’re listening to Nirvana’s Smells Like Teen Spirit, which goes from a quiet verse to loud chorus and back again, it’s a different story.
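One crude way to quantify the contrast described above is the standard deviation of frame-level loudness. This is a hedged sketch (the tones and labels are invented stand-ins, not data from the study):

```python
import math

def rms_frames(samples, window=512):
    return [math.sqrt(sum(x * x for x in samples[i:i + window]) / window)
            for i in range(0, len(samples) - window + 1, window)]

def dynamic_variability(samples, window=512):
    """Std. deviation of frame-level RMS loudness: a crude measure of dynamic contrast."""
    frames = rms_frames(samples, window)
    mean = sum(frames) / len(frames)
    return math.sqrt(sum((f - mean) ** 2 for f in frames) / len(frames))

def tone(amp, n):
    return [amp * math.sin(2 * math.pi * 220 * t / 8000) for t in range(n)]

always_loud = tone(0.9, 16000)                   # uniformly loud throughout
quiet_loud = tone(0.2, 8000) + tone(0.9, 8000)   # quiet verse, loud chorus
print(dynamic_variability(always_loud) < dynamic_variability(quiet_loud))  # True
```

The quiet-to-loud signal scores far higher, mirroring the verse/chorus contrast the researchers point to.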

The team also discovered that galvanic skin response (basically, a measure of sweat) increased after the entrance of a new instrument or the start of a musical crescendo.

“When each new instrument enters, you can see a spike in the collective response of the skin,” said Greer.

In addition, the most stimulating moments in music were preceded by an increase in the complexity level of the song.

In essence, the more instruments in the song, the stronger people’s responses.

(Think: the first section of Mike Oldfield’s Tubular Bells, as the song builds to a crescendo by adding more instruments.)

And the saddest note of all? That award goes to the raised 7th note of the minor scale. The study found that the note F# in a song in the key of G minor positively correlated with high sadness ratings.

That might be the reason the narrator’s anguish is almost palpable in The Animals’ House of the Rising Sun, which uses the raised 7th of the minor scale to launch into each increasingly emotional verse.
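For readers without music theory: the natural 7th of G minor is F, and raising it a semitone gives F#. A small pure-Python check of that pitch arithmetic (the note table and scale construction are standard, but this is illustration, not the study’s code):

```python
# Pitch classes in semitones above C (one common enharmonic spelling).
NOTES = {'C': 0, 'C#': 1, 'D': 2, 'Eb': 3, 'E': 4, 'F': 5,
         'F#': 6, 'G': 7, 'Ab': 8, 'A': 9, 'Bb': 10, 'B': 11}

def natural_minor(tonic):
    """Pitch classes of the natural minor scale built on the given tonic."""
    steps = [0, 2, 3, 5, 7, 8, 10]  # semitone offsets of the natural minor scale
    root = NOTES[tonic]
    return [(root + s) % 12 for s in steps]

g_minor = natural_minor('G')          # G A Bb C D Eb F
seventh = g_minor[6]                  # natural 7th: F (pitch class 5)
raised_seventh = (seventh + 1) % 12   # raise by a semitone: F# (pitch class 6)
print(raised_seventh == NOTES['F#'])  # True
```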

New territory

For this experiment, the team selected three emotional pieces of music that did not contain lyrics and were not highly familiar, so no element of memory was attached to the listeners’ emotional response. (Hearing a song that played in the background during a wisdom tooth extraction, for instance, might skew your perception.)

In the neuroimaging experiment, 40 volunteers listened to a series of sad or happy musical excerpts, while their brains were scanned using MRI.

This was conducted at USC’s Brain and Creativity Institute by Assal Habibi, an assistant professor of psychology at USC Dornsife College of Letters, Arts and Sciences, and her team, including Matthew Sachs, a postdoctoral scholar currently at Columbia University.

To measure physical reaction, 60 people listened to music on headphones, while their heart activity and skin conductance were measured.

The same group also rated the intensity of emotion (happy or sad) from 1 to 10 while listening to the music.

Then, the computer scientists crunched the data using AI algorithms to determine which auditory features people responded to consistently.
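The study’s models are more sophisticated than this, but the underlying idea of relating a musical-feature series to an emotion-rating series can be sketched with a plain Pearson correlation. All the numbers below are invented for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-second series: a "pulse clarity" feature and averaged sadness ratings.
pulse_clarity = [0.2, 0.3, 0.5, 0.7, 0.8, 0.9]
sadness = [6.0, 5.5, 4.0, 3.0, 2.5, 2.0]
r = pearson(pulse_clarity, sadness)
print(r < -0.9)  # strongly negative in this toy data: clearer beat, lower sadness
```

Repeating this kind of analysis across all 74 features and every signal (heart rate, skin conductance, brain activity, ratings) lets researchers rank which features predict listener response most consistently.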

In the past, neuroscientists trying to better understand the impact of music on the body, brain and emotions have analyzed MRI brain scans over very short segments of time (for instance, looking at the brain reacting to two seconds of music).

By contrast, in this study, using algorithms to analyze data gathered in the lab, the scientists were able to look at how people felt while listening to music over longer periods of time, not only from brain scans but also by combining data from other modalities.

In addition to helping researchers identify songs for the perfect workout, study or sleep playlist, the research has therapeutic applications: music has been shown to calm anxiety, ease pain and help people with disabilities or dementia.

“Novel multimodal computing approaches help not just illuminate human affective experiences to music at the brain and body level, but in connecting them to how individuals actually feel and articulate their experiences,” said Professor Shrikanth (Shri) Narayanan, study co-author, Niki and C. L. Max Nikias Chair in Engineering and professor of electrical and computer engineering and computer science.

Feeling good


“From a therapy perspective, music is a really good tool to induce emotion and engage a better mood,” said Habibi.

“Using this research, we can design musical stimuli for therapy in depression and other mood disorders. It also helps us understand how emotions are processed in the brain.”

According to the researchers, future studies could look at how different types of music can positively manipulate our emotional responses and whether the intent of the composer matches the listener’s perception of a piece of music.

That music listening can have strong emotional effects is widely accepted, and it is also known that the effects depend on several individual and environmental factors.

Several elements of the musical experience influence the listener’s emotional response; we focus on four that have been explored in the scientific literature. The first is the type of music, referring to its structural features (for instance, music that is assumed to be calming or stimulative).

The second is individual differences: even when subjects choose their preferred type of music with its specified characteristics (stimulating or calming, respectively), the emotional responses elicited may vary considerably between individuals according to, for example, personality and the social or cultural context in which they reside [1].

Even so, it has been noted that stimulative self-selected music tends to associate strongly with joy and energy, while calming music generally has an anxiety-reducing effect.

The third is the listening format: whether an individual hears recorded or live music has been shown to matter for the emotional response [2].

The fourth and final factor is the listener’s familiarity with the piece of music: if strong prior emotional experiences are attached to a specific piece, the emotional effects may be amplified compared with those experienced by other listeners.

For instance, some individuals may respond with strong anxiety to a piece of music regarded as calming by most listeners and react with joy to a piece regarded as sad by other people. There is extensive literature on these factors, as summarized for instance by Juslin and Sloboda [3].

The patterns of emotional reactions during music listening are admittedly very complex, as discussed by Salimpoor et al. [4].

Responses related to arousal are related to basic elements of the music, such as volume, timbre, pitch, and tempo [5].

Unexpected elements in the music may give rise to arousal as well as to joyful, sad, or worried reactions if the emotional message is in conflict with the emotional state that the person happens to be in when the listening starts [6].

The aim of the present study is to assess emotional responses to music across three age groups, drawing on contrasting music listening situations.

The design of the study mobilises a simple paradigm based mainly on James Russell’s elaborated version of his original circumplex model of affect [7], which has been used extensively in the music and emotion literature by Schubert [8], Eerola and Vuoskoski [9], and Schimmack and Grob [10], among others. Russell’s model is composed of two axes, valence (unpleasant to pleasant) and arousal (deactivation to activation), and its framework describes how the affective concepts of pleasure, excitement, arousal, stress, displeasure, depression, sleepiness, and relaxation fit into the model. This two-axis model has also been linked in later work to physiological theory [11].
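The circumplex idea is straightforward to operationalize: any affective state is a point on the valence and arousal axes, and the four quadrants carry characteristic labels. The sketch below uses one common simplified reading of Russell’s terms; the exact label set is an illustrative assumption, not the model’s canonical vocabulary.

```python
def circumplex_quadrant(valence, arousal):
    """Map a (valence, arousal) pair onto a quadrant of Russell's circumplex model.

    Valence runs from unpleasant (-1) to pleasant (+1);
    arousal runs from deactivation (-1) to activation (+1).
    """
    if valence >= 0 and arousal >= 0:
        return 'excitement'   # pleasant and activated
    if valence < 0 and arousal >= 0:
        return 'distress'     # unpleasant and activated
    if valence < 0:
        return 'depression'   # unpleasant and deactivated
    return 'relaxation'       # pleasant and deactivated

print(circumplex_quadrant(0.8, 0.6))   # excitement
print(circumplex_quadrant(0.7, -0.5))  # relaxation
```

A rating tool built on this model only needs listeners to report two numbers per excerpt, which is what makes it attractive as a simple standardized measure.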

Although many experiments on the emotional effects of music listening have been performed in the past, little effort has been made to standardize a simple measurement tool for the most crucial emotional reactions.

In the present study we test such a tool in contrasting situations, in the classroom versus the lecture or concert hall, with widely different audiences ranging in age from schoolchildren to elderly subjects, and with large differences in their experience of the kind of music they were exposed to.

The only factor kept constant throughout the experiments was genre: only classical music was played. We predict that emotional responses to the musical pieces will be widespread across age groups, degrees of musical background, familiarity with specific pieces of music, and listening conditions (live or recorded).

Media Contacts:
Amy Blumenthal – USC

Original Research: The paper, titled “A Multimodal View into Music’s Effect on Human Neural, Physiological, and Emotional Experience,” was presented at ACM Multimedia, Oct. 22. The research team also includes Ben Ma, a USC Viterbi School of Engineering undergraduate student in computer science and a member of USC SAIL.

