When listening to a song or watching a dance, humans tend to follow the rhythm of the music.
This is because one fundamental aspect of music is its rhythm, the way we synchronize with the temporal regularities of a melody or a dance.
A recent study explores how our brain synchronizes with musical rhythm and the extent to which humans share this ability with other animals.
Alexandre Celma-Miralles and Juan Manuel Toro, an ICREA research professor in the Department of Information and Communication Technologies (DTIC), both members of the Comparative Cognition and Language (LCC) research group at the Center for Brain and Cognition (CBC) at UPF, describe this ability in an article published this November in the journal Brain and Cognition.
“This study explores the relationship between the rhythmic structure of music and the spatial dimension of sound.
We study how the brain interacts with sounds that are spatially separate to build up a metrical structure”, explain Celma-Miralles and Toro, the authors of the study.
To do so, they compared the neural responses of professional musicians with those of untrained listeners while both groups listened to a waltz.
In one of the study experiments, the participants had to pay attention to sounds defined by their spatial position (the sounds were separated in space).
In another experiment, the participants had to pay attention to a visual distraction. Data for the study were obtained from frequency analysis of each participant's EEG recordings.
Rhythm and beat are enhanced by experience
The researchers found that regardless of the participant’s musical training, the brains of all listeners synchronized with the rhythm.
The results also showed that musicians’ neuronal responses were much stronger and more resistant to distractions than those of non-musicians.
That is, the study revealed that training facilitates rhythmic synchronization.
Schematic representation of a participant listening to the experimental conditions.
In the control condition (a), the isochronous beat was always presented at 0° (in front of the participant).
In the spatial conditions, such as the Spatial 60° (b), the isochronous beat alternated at symmetrical angular positions: the first sound was presented at one side and the two following sounds at the contralateral side, thus following a ternary meter pattern defined over spatial cues. The image is credited to UPF.
As the researchers state, “the most relevant point of this study is that it demonstrates that our brains are prepared to follow rhythm, regardless of whether we listen to a song or watch a dance”.
This reinforces the idea that the neuronal processing of rhythm and beat is facilitated by previous experience with rhythmic events during long periods of formal musical training.
One of the most basic and universal responses a person can have to music is engagement. When listeners are engaged with music, they follow the sounds closely, connecting in an affective, invested way to what they hear.
Despite the importance of this engagement, it has been difficult to study given the limits of self-report and physiological measures, especially in a domain like classical music where listeners are accustomed to sitting quietly without providing overt evidence of their internal experience.
A domain with a similar problem is film, where viewers might sustain powerfully involving experiences, but remain motionless and silent in their seat.
Dmochowski et al. showed that inter-subject correlation in neural activity as measured by EEG corresponds with arousing moments–such as a close-up of a weapon–within rich cinematic narratives, but not within casual footage of everyday life1.
When participants engage with a film–where ‘engagement’ is defined behaviorally as a commitment to watch the film2–their neural responses often track occurrences in predictable ways that are shared from viewer to viewer.
Thus, previous work has shown that the level of inter-subject correlation is correlated with audience retention assessed behaviorally. This work validated ISC as a neural predictor of how well a film can capture its audience, in a fairly literal sense2.
Similarly, synchrony between classroom students’ brains predicts classroom engagement3. Thus, inter-subject correlation of neural responses is also potentially well-suited for measuring musical engagement4, even though the relationship between engagement and ISC is not perfect (a period of low ISC, for example, might mean either that listeners are unengaged or that they are engaged by different aspects of the stimulus).
By exposing people to excerpts of instrumental music and tracking the inter-subject correlation of their neural response, this paper aims to assess engagement implicitly.
We postulate that inter-subject correlation is also a good predictor of musical engagement. While we will not test this explicitly, we will perform a number of manipulations that are expected to affect musical engagement and track how these affect inter-subject correlation of the EEG.
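At its simplest, inter-subject correlation can be sketched as the mean pairwise Pearson correlation between subjects’ time-locked neural responses. The sketch below illustrates that idea on synthetic data; it assumes a single response time series per subject and is a simplified illustration, not the analysis pipeline of the studies discussed (which rely on correlated components analysis across multiple EEG channels2).

```python
import numpy as np

def pairwise_isc(responses):
    """Mean pairwise Pearson correlation across subjects.

    responses: array of shape (n_subjects, n_samples), one response
    time series per subject, time-locked to the same stimulus.
    """
    n = responses.shape[0]
    # z-score each subject's time series (population standard deviation)
    z = (responses - responses.mean(axis=1, keepdims=True)) \
        / responses.std(axis=1, keepdims=True)
    # average the correlation of every subject pair
    corrs = [np.mean(z[i] * z[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(corrs))

rng = np.random.default_rng(0)
shared = rng.standard_normal(1000)
# Identical responses across subjects: ISC near 1
print(pairwise_isc(np.stack([shared] * 4)))
# Independent noise across subjects: ISC near 0
print(pairwise_isc(rng.standard_normal((4, 1000))))
```

Intuitively, a stimulus that “grips” all listeners in the same way pushes their responses toward the first case; idiosyncratic processing pushes them toward the second.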
One characteristic of music that distinguishes it from film is the prevalence of repetition–not only do individual songs feature copious repetition (a chorus that recurs again and again, for example), but also listeners tend to play and replay their favorite tracks5.
When watching film, inter-subject correlation decreases when participants watch a clip a second time, suggesting that the film has become less engaging or that the audience’s attention has started to wander6.
Because repetition plays such a significant role in music, this paper also aims to address whether repeated exposures influence engagement differently for musical examples.
A large body of research in psychoaesthetics traces the inverted-U shaped curve in “hedonic value” (which might be thought about as some kind of composite of enjoyment, interest, and attentiveness) across multiple exposures to a particular stimulus–especially a piece of music7.
As listeners encounter a piece again and again, they tend to enjoy it more and more, until a threshold beyond which liking diminishes with further repetitions8.
This curve is reliably modulated by stimulus complexity. For an intricate, challenging stimulus, it can take more repetitions to reach the peak of the curve, whereas for a simple one, the plateau can arrive more quickly9.
Stimulus complexity is not determined purely by acoustic properties; listener experience also plays a role. For a listener without experience in a given style, an excerpt might seem quite complex, but to a listener well versed in the style–able to chunk and process patterns with ease–the same excerpt might seem simple.
In our study, we theorized that people with formal musical training would likely have more experience with the type of instrumental excerpts used as stimuli, rendering them comparatively less complex. They may also discern more structure in the music and find it comparatively more engaging10.
In order to assess engagement across repeated exposures, this paper uses excerpts of instrumental classical music. Since good theories of moment-to-moment engagement exist for this repertoire, this choice of stimuli allows an assessment of the degree to which inter-subject correlation serves as a useful measure.
Moreover, since the music involves only instruments rather than singing, there is no linguistic content to influence the responses. Participants listened multiple times to each excerpt in immediate succession. Half of the excerpts were composed in the common-practice style ubiquitous in concert halls and media soundtracks.
Half of the excerpts used musical materials that are less prevalent. When listeners have experience with a particular style, they can parse the music more easily, rendering it simpler. When they lack such experience, however, the music can seem more complex and difficult11. In keeping with previous research on the inverted-U response, engagement (and thus inter-subject correlation) might increase across repetitions of music in an unfamiliar style, and decrease across repetitions of music in a familiar style, according to where the piece starts on this inverted-U.
We present evidence that ISC may be capable of tracking musical engagement without behavioral reports, measuring stimulus processing implicitly using EEG. By examining the degree to which participants’ responses match each other, we aim to directly measure how the music “grips” the listener’s brain. When their neural responses proceed in sync, the music is driving their experience.
Because the inter-subject correlation of EEG signals picks up very rapid responses to the music, on the time scale of a second or less, it is unlikely to track explicit cogitation and more likely to track responses that are stimulus-driven. Given that more high-level and explicit responses likely diverge widely from person to person, other methods are needed to assess these aspects of musical responses.
Since all participants share exposure to the same time-locked stimulus, synced neural responses likely proceed from the influence of this stimulus, whereas divergent responses likely proceed from more idiosyncratic factors.
This method should allow future research to probe the relationship between musical structure and musical engagement, or the influence of various types of extramusical factors on musical engagement. For example, can an intimate performance setting or a pre-concert talk increase musical engagement?
In this study, inter-subject correlation increased when participants listened to music composed in a familiar style compared to music composed in an unfamiliar style. Prior experience with a style shapes listener expectations and provides an entry point for engagement even on the first hearing18. Music written in a less familiar style cannot captivate attention as easily or uniformly19.
Conceptualized in terms of the inverted U-shaped response so prevalent in psychoaesthetics8, familiar music can come in closer to the peak on the first hearing, while unfamiliar music can require more exposure to engage listeners. Indeed, as participants listened and relistened to music written in a familiar style, engagement decreased, but this drop did not occur for music written in an unfamiliar style.
Formal musical training modulated these effects. The decrease across repetitions of familiar music was most pronounced for listeners with formal musical training. Their experience with similar music likely made these excerpts sound quite simple to them, precipitating a rapid drop off in engagement with repeated exposures. Unfamiliar music, on the other hand, continued to maintain their attention, offering enough novelty to sustain interest.
Important caveats temper our conclusions. Musical familiarity varies from person to person. We aimed to address this issue by selecting musical excerpts based on pilot ratings of style familiarity for each excerpt.
Group judgements of familiarity by participants in Experiments 2 and 3 confirmed this categorization for participants in those studies, but individual differences might still have affected the results. Additionally, the link between ISC and engagement requires further examination.
Although the relationship between ISC and engagement has been thoroughly validated for film2, we did not directly test it for music. Yet the present results are consistent with it.
For one thing, we replicated the attention modulation (attend/distract) manipulation of Ki, Kelly and Parra6 for these musical excerpts. For another, the pattern of ISC changes seems consistent with the predicted trajectory of engagement across repeated exposure.
But ultimately the caveat remains, and an appropriate behavioral metric of engagement is needed for validation—potentially listening duration as used in Cohen, Henin and Parra2.
The persistence of engagement, as measured by ISC across repetitions, for music in an unfamiliar style is unique among the domains in which engagement has been tested with this method. Typically, ISC drops quite consistently across repetitions6.
Music’s ability under some circumstances to hold attention and engagement across repeats is consistent with theories about some of the domain-specific roles repetition plays in music (Margulis)5.
We interpret the slope of ISC across repetitions, measured in trained participants, as an index of how well interest in the music persists. Under this interpretation, ordering the excerpts by this slope suggests that excerpts with a steeper decline in ISC across exposures featured more internal repetition and predictable patterning than excerpts that lacked such a decline (see Table 1, which lists the slope of ISC across repetitions, with pieces sorted by increasing slope, i.e. from decreasing to increasing engagement over repetitions).
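The per-excerpt slope can be obtained with an ordinary least-squares fit of ISC against hearing number. The sketch below shows that computation on hypothetical ISC values (the numbers are invented for illustration, not taken from Table 1).

```python
import numpy as np

def isc_slope(isc_by_repetition):
    """Least-squares slope of ISC across repeated hearings.

    isc_by_repetition: ISC values for hearings 1, 2, 3, ... of one
    excerpt. A negative slope indicates that engagement (as indexed
    by ISC) declines with repetition; a flat or positive slope
    indicates that it persists.
    """
    hearings = np.arange(1, len(isc_by_repetition) + 1)
    slope, _intercept = np.polyfit(hearings, isc_by_repetition, 1)
    return float(slope)

# Hypothetical values: a familiar-style excerpt whose ISC falls across
# three hearings, vs. an unfamiliar-style excerpt whose ISC holds steady.
print(isc_slope([0.030, 0.022, 0.014]))  # negative slope
print(isc_slope([0.020, 0.020, 0.021]))  # slope near zero
```

Sorting excerpts by this slope yields the ordering described above, from steepest decline to greatest persistence of engagement.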
The excerpt that maintained the most engagement across the three hearings was the one by Thomas Adès, which maintains a loud dynamic level throughout and features the near-constant introduction of new instruments, timbres, and flourishes as the passage progresses, avoiding the predictability of many of the other excerpts.
The excerpt that showed the steepest drop-off in engagement from first to second hearing was the one by Rossini, which also maintains a high volume level but differs in its iterative use of numerous patterns, rendering the excerpt much more predictable (see Table 1). Novelty in particular may explain the drop in ISC, because novelty is known to drive evoked EEG activity, e.g. the P300, the mismatch negativity, or error-related potentials18,19.
It may be that as subjects hear the pieces again and again, the surprise effect vanishes, and ISC drops with the diminishing evoked responses. Future research could test this hypothesis more directly with the paradigm presented here.
This paper suggests a new methodology for tracking musical engagement via EEG. It also presents neuroscientific evidence to bolster theories in psychoaesthetics that arose in the 1970s before it was possible to use techniques other than behavioral to investigate them. Future research could harness the potential of measuring ISC to reveal more about how music captivates the mind.
Press Office – UPF Barcelona