“Don’t hold back, sing with all of your heart,” said our colleague Simon Baron-Cohen on a Zoom meeting the other night with his fellow band members.
Simon is director of the Autism Research Centre at Cambridge University by day and bass player of the blues and funk group Deep Blue by night. His band and many others are taking to the Zoom airwaves to play music together.
One of the most encouraging phenomena we have begun to see in response to social distancing laws is the innovative ways that people are starting to bond with each other, particularly musically.
At the start of the lockdown in Italy, videos went viral on social media of neighbours singing with each other across their balconies. This trend also happened in Israel, Spain, Iraq, the US, France, Lebanon, India, Germany and other countries. And it wasn’t just balconies. People went to their rooftops, windows, and even online.
This need to bond – through music especially – relates to the fundamental features of being human. In some ways, amid the horrors of the COVID-19 pandemic, we are experiencing a global social psychological experiment that is giving us insight into what lies at the core of our humanity.
We are innately social creatures. In fact, some scholars have argued that, on a biological level, the social brain in humans is more developed than that of any other species on earth. As such, we humans have a biological need to form bonds and cooperate with one another.
This is evident in the physiological and psychological stress we experience when we are isolated, which increases our drive to connect with others – something we are witnessing in societies around the world.
Simply put, the social brain needs to be fed and, if forced into isolation, will adapt to find ways to connect.
What is interesting is that simply messaging each other or making phone calls doesn’t seem to do the trick. Even face-to-face video conferencing hasn’t been enough for many. We need to connect in a way that the social brain will resonate with on an emotional level.
This is where music comes in. We are all familiar with the phrase “music is food for the soul”, but it is also true that “music is food for the brain”.
Research shows that when we sing together, our social brains are activated to produce oxytocin.
This is a brain hormone closely linked to the way humans socialise with each other. It is released when we form social bonds, when we are synchronised with each other during face-to-face interactions, and when we are intimate with others, which is why some refer to it as the “cuddle” or “love” hormone.
One 2017 study by T. Moritz Schladt and colleagues showed that oxytocin increases during improvisational singing with others.
But it isn’t just singing that increases oxytocin. A 2017 study by Yuuki Oishi and colleagues showed that oxytocin increases after just listening to music. And not only that, it increases when listening to both slow and fast musical tempos.
What makes us human
All of this points to why, on a biological level, music is part of what makes us human.
Everyone is different and there is music to meet everyone’s tastes, which is why we run a project called Musical Universe in which people can take tests and find out how their unique musical preferences link to their brain type and personality.
But whatever your specific tastes, music plays an important role in connecting with others in lockdown. That’s why group singing sessions have sprouted across courtyards and via video conferencing platforms during the pandemic. And why we see Elton John, Alicia Keys, Chris Martin of Coldplay and many others live streaming concerts from their homes for the world to partake in.
Music dates back at least 40,000 years in human history. Evolutionary theories about the origins of music are many, but most emphasise its social role. This includes strengthening group cohesion in hunter-gatherer times and as a way of signalling shared values and strength within and between tribal groups.
Even Charles Darwin contemplated the origins of music, and argued that it may have played a part in sexual selection. He suggested that courtship songs might have signalled attractive and evolutionary adaptive traits to potential partners.
Today, while we face a global crisis, music shows no signs of slowing down, even in forced isolation. Music lies at the very essence of our humanity because it enables the level of social bonding that distinguishes us from other species. From lullabies sung from a parent to their infant, to mass jam sessions online, we can all turn to song to maintain our sanity, our hope, and our empathy toward one another.
Funding: David M. Greenberg receives funding from the Zuckerman STEM Leadership Program for his postdoctoral studies at Bar-Ilan University. He is also founder of the Musical Universe project.
Ilanit Gordon receives funding from the Israel Science Foundation and from Negotiation and Team Resources under the NTR-INGroup research grant program.
Music is a prominent feature of everyday life and a cultural universal [1, 2]. Each day we come across music of varying styles and characteristics, and we continually make judgments about whether or not we like the music we hear.
When listening to a new song, it takes us just a few seconds to decide whether to press repeat, skip to the next tune, or buy it. However, little is known about what determines our taste in music. We address this gap in the literature by examining the cognitive and affective underpinnings of musical preferences.
Research over the past decade has argued that musical preferences reflect explicit characteristics such as age, personality, and values [3–6]. Indeed, findings across studies and geographic regions have converged to show that the Big Five personality traits are consistently linked to preferences [6–12].
For example, people who are open to new experiences tend to prefer music from the blues, jazz, classical, and folk genres, and people who are extraverted and agreeable tend to prefer music from the pop, soundtrack, religious, soul, funk, electronic, and dance genres.
Though these findings are consistent across studies, what is also consistent is that the results have small effect sizes (r < .30) when compared to benchmarks used in other psychological research.
This raises the question of whether there are additional psychological mechanisms that might account for individual differences in musical preferences. In this article we build on previous research by examining two dimensions that may be linked to musical preferences: empathy and systemizing.
Empathizing-Systemizing (E-S) Theory and Music Research
Music listening involves a range of abilities, including: perceptual processing, taking in and making sense of audio and visual content in music [15, 16]; affective reactivity, reacting emotionally and physiologically to it [17–19]; intellectual interpretation, interpreting how the detailed emotional and sonic elements in the music relate to the whole; and prediction, anticipating the expected direction of the music (e.g. the melody or narrative) and predicting the thoughts and feelings of the musician [21–24].
These musical abilities may overlap with the drives to empathize and systemize. Empathy is the ability to identify, predict, and respond appropriately to the mental states of others [25, 26].
People use empathy when perceiving musical content, reacting emotionally and physiologically to it, and while performing [27–32]. Systemizing is the ability to identify, predict, and respond to the behavior of systems by analyzing the rules that govern them. These include systems that are natural (e.g. the weather), abstract (e.g. mathematics), organizational (e.g. classifications), and technical (e.g. a mechanical motor) [34, 35].
People are likely to systemize when perceiving and interpreting musical content, particularly when analyzing and deconstructing its sonic features and interpreting how the detailed elements in a musical piece relate to the whole.
Even though research into music and empathy has increased, there remains very little empirical research into systemizing and music. This is surprising given that there is evidence that empathy and systemizing are not entirely independent of each other [33, 38–40].
Individual differences in empathy can be measured by the Empathy Quotient (EQ), and systemizing can be measured by the Systemizing Quotient-Revised (SQ-R); both have contributed to the empathizing-systemizing (E-S) theory [38, 39].
Measurements on these two dimensions reveal a person’s cognitive style (or ‘brain type’). Those who score higher on the EQ than the SQ are classified as ‘type E’ (empathizing), and those who score higher on the SQ than the EQ are classified as ‘type S’ (systemizing).
Individuals with relatively equal scores on both are classified as ‘type B’ (balanced). Research has provided evidence that these two dimensions explain psychological sex differences: more females are classified as type E and more males are classified as type S. Furthermore, scores on the EQ and SQ predict autistic traits as measured by the Autism Spectrum Quotient (AQ) [41, 42].
Those on the autism spectrum are typically classified as type S or ‘extreme type S’ [41, 43, 44]. These brain types have a neurobiological basis [45, 46]. In males, for example, systemizing is positively linked to the size of the hypothalamic and ventral basal ganglia brain regions.
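The brain-type classification described above (type E, type B, type S) can be sketched in code. Note this is only an illustrative sketch: the paper does not give its exact cut-offs here, so standardizing the EQ and SQ scores and the `margin` threshold are assumptions for demonstration purposes.

```python
def classify_brain_type(eq_z: float, sq_z: float, margin: float = 0.5) -> str:
    """Classify a respondent's cognitive style from standardized EQ and SQ
    scores. The z-scoring and the `margin` cut-off are illustrative
    assumptions, not the instruments' published scoring rules."""
    diff = eq_z - sq_z
    if diff > margin:
        return "type E"   # empathizing: EQ clearly exceeds SQ
    if diff < -margin:
        return "type S"   # systemizing: SQ clearly exceeds EQ
    return "type B"       # balanced: roughly equal scores

# Example: a respondent well above average on empathy, average on systemizing
print(classify_brain_type(1.2, 0.1))   # "type E"
```

The key design point is that the classification depends only on the relative difference between the two scores, not their absolute levels, which is why a person can score high on both instruments and still be classified as balanced.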
There have only been a few studies exploring how empathy links to musical preferences, and none on systemizing. Vuoskoski and colleagues asked participants (N = 148) to indicate their liking for 16 musical excerpts from film music.
The 16 excerpts were categorized into four groups: sad, happy, scary, and tender. Results showed that empathy was positively correlated with preferences for sad and tender music, and there were no significant correlations for happy or scary music. However, because the excerpts were exclusive to film music, the extent to which these findings generalize beyond the soundtrack genre is not yet known.
In another study, Egermann & McAdams found that preferences moderated the relationship between empathy and emotion contagion in music; however, they did not examine the direct links between empathy and individual differences in musical preferences.
Therefore, we extend this previous research by using a music-preference model that examines preferences with stimuli representative of the musical variety that people listen to in everyday life, and which also overcomes critical limitations in previous research in the area of musical preferences.
Research into musical preferences has long been hindered by constraints posed by genre-based methodologies. Researchers frequently measure preferences by asking participants to rate their liking for a list of genres.
However, genres are artificial labels that have been developed over decades by the record industry, and which carry elusive definitions and shifting social connotations. They can hold different definitions depending on the time period being referenced.
For example, the ‘jazz’ label can refer to the swing era of the 1930s and ’40s and the music of Louis Armstrong and Count Basie, but it can also refer to the post-bop and avant-garde era of the 1960s and ’70s, which featured the music of John Coltrane and Sun Ra. Genres are also umbrella terms that cover a variety of sub-styles.
For example, the ‘rock’ label can refer to ‘soft rock’, such as music by Billy Joel and Elton John, but also ‘hard rock’, such as music by AC/DC and Guns N’ Roses. Therefore, genre-based methodologies that ask participants to indicate their liking for genre labels make it difficult for researchers to accurately capture information about an individual’s preferences.
To address this issue, Rentfrow, Goldberg, & Levitin measured musical preferences across four independent samples by asking participants to report their preferential reactions to musical stimuli that were representative of a variety of genres and subgenres.
Separately, judges rated these excerpts based on their perceptions of various sonic (e.g. instrumentation, timbre, and tempo) and psychological (e.g. joyful, sad, deep, and sophisticated) attributes in the music.
Findings across all of the samples converged to suggest that a robust five-factor structure underlies musical preferences, and that each of the five dimensions are defined and differentiated by configurations of their perceived musical attributes.
These dimensions (coined the MUSIC model after the first letter of each dimension label) are:
- Mellow: romantic, relaxing, unaggressive, sad, slow, and quiet attributes (e.g. the soft rock, R&B, and adult contemporary genres);
- Unpretentious: uncomplicated, relaxing, unaggressive, soft, and acoustic attributes (e.g. the country, folk, and singer/songwriter genres);
- Sophisticated: inspiring, intelligent, complex, and dynamic attributes (e.g. the classical, operatic, avant-garde, world beat, and traditional jazz genres);
- Intense: distorted, loud, and aggressive attributes, and neither relaxing, romantic, nor inspiring (e.g. the classic rock, punk, heavy metal, and power pop genres);
- Contemporary: percussive and electric, but not sad, attributes (e.g. the rap, electronica, Latin, acid jazz, and Euro pop genres).
We employ the MUSIC model in the current investigation because of four notable advantages. First, the five factors are recoverable not only across genres but also within them: in two independent studies, the MUSIC model was replicated within preferences using music from only a single genre, first among preferences for jazz music, and second within preferences for rock music. Second, the model has ecological validity because the excerpts administered were of studio-recorded music, as opposed to music computer-generated or manipulated for the purposes of an experiment.
Third, by consulting experts in the field, the musical excerpts were selected via a systematic procedure that aimed to generate a stimulus set representative of the large spectrum of musical characteristics and styles that people are exposed to in their everyday lives. Fourth, because each of the excerpts was coded for its sonic and psychological attributes, fine-grained observations about an individual’s musical preferences can be made.
The aim of this research was to investigate the cognitive and affective basis of musical preferences by asking people to report their preferential reactions to musical stimuli. To address this aim, we used multiple samples, musical stimuli, and recruitment routes to examine how individual differences in musical preferences are empirically explained by empathizing, systemizing, and cognitive ‘brain types’. The specific aims of this research were:
- To examine whether empathizing and systemizing correlate with musical preferences across multiple samples.
- To test if replicated patterns of results emerge for preferences within a single genre of music: first within rock music and second within jazz music.
- To examine how individual differences in musical preferences are differentiated by brain type. Specifically, we examined how preferences for broad musical styles (as outlined by the MUSIC model) are differentiated by brain type E, type B, and type S.
- To examine how preferences for fine-grained features in music (preferences for specific psychological and sonic attributes) vary according to brain type.
- To test the extent to which the findings are independent of sex and personality traits.
Contemporary research into musical preferences has adopted an interactionist approach, positing that people seek musical environments that reflect and reinforce their personal characteristics (e.g. personality traits) [4, 6].
Extending this theory to cognitive styles, we predicted that people would prefer music that reflects their empathizing and systemizing tendencies. Because empathizers have a tendency to perceive and react to the emotional and mental states of others, we predicted that empathizers would prefer music that reflects emotional depth.
Elements of emotional depth are often heard in the Mellow and Unpretentious music-preference dimensions, which feature soft, gentle, reflective, thoughtful, and warm attributes. And because systemizers have a tendency to construct and analyze systems, we predicted that systemizers would prefer music that contains intricate patterns and structures.
These elements are often heard in the Sophisticated music-preference dimension, which features instrumental, complex, and intelligent or cerebral attributes (ibid). Importantly, since systemizers often have lower levels of empathy, we predicted that systemizers would prefer music opposite to that featured in the Mellow dimension: music with strong, tense, and thrilling attributes, as featured in the Intense music-preference dimension.
1. Blacking J. (1995). Music, culture, and experience: Selected papers of John Blacking. London: University of Chicago Press.
2. DeNora T. (2000). Music in everyday life. Cambridge: Cambridge University Press.
3. Boer D., Fischer R., Strack M., Bond M. H., Lo E., & Lam J. (2011). How shared preferences in music create bonds between people: Values as the missing link. Personality and Social Psychology Bulletin, 37(9), 1159–1171.
4. Bonneville-Roussy A., Rentfrow P. J., Xu M. K., & Potter J. (2013). Music through the ages: Trends in musical engagement and preferences from adolescence through middle adulthood. Journal of Personality and Social Psychology, 105(4), 703.
6. Rentfrow P. J., & Gosling S. D. (2003). The do re mi’s of everyday life: The structure and personality correlates of music preferences. Journal of Personality and Social Psychology, 84, 1236–1256.
8. Delsing M. J. M. H., ter Bogt T. F. M., Engels R. C. M. E., & Meeus W. H. J. (2008). Adolescents’ music preferences and personality characteristics. European Journal of Personality, 22, 109–130.
9. Dunn P. G., de Ruyter B., & Bouwhuis D. G. (2012). Toward a better understanding of the relation between music preference, listening behavior, and personality. Psychology of Music, 40, 411–428.
10. George D., Stickle K., Rachid F., & Wopnford A. (2007). The association between types of music enjoyed and cognitive, behavioral, and personality factors of those who listen. Psychomusicology, 19, 32–56.
11. Langmeyer A., Guglhor-Rudan A., & Tarnai C. (2012). What do music preferences reveal about personality? A cross-cultural replication using self-ratings and ratings of music samples. Journal of Individual Differences, 33(2), 119–130.
12. Zweigenhaft R. L. (2008). A do re mi encore: A closer look at the personality correlates of music preferences. Journal of Individual Differences, 29, 45–55.
13. Rentfrow P. J., & McDonald J. A. (2009). Music preferences and personality. In Juslin P. N. & Sloboda J. (Eds.), Handbook of music and emotion (pp. 669–695). Oxford, United Kingdom: Oxford University Press.
14. Cohen J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
15. Juslin P. N., & Lindström E. (2010). Musical expression of emotions: Modelling listeners’ judgments of composed and performed features. Music Analysis, 29(1–3), 334–364.
17. Lundqvist L.-O., Carlsson F., Hilmersson P., & Juslin P. N. (2009). Emotional responses to music: Experience, expression, and physiology. Psychology of Music, 37, 61–90.
18. Rickard N. S. (2004). Intense emotional responses to music: A test of the physiological arousal hypothesis. Psychology of Music, 32, 371–388.
20. Gabrielsson A. (2011). Strong experiences with music: Music is much more than just music. Oxford, UK: Oxford University Press. (Original work published 2008.)
21. Huron D. B. (2006). Sweet anticipation: Music and the psychology of expectation. Cambridge, MA: MIT Press.
22. Juslin P. N., & Laukka P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129(5), 770.
23. Steinbeis N., Koelsch S., & Sloboda J. A. (2006). The role of harmonic expectancy violations in musical emotions: Evidence from subjective, physiological, and neural responses. Journal of Cognitive Neuroscience, 18(8), 1380–1393.
24. Trainor L. J., & Zatorre R. J. (2009). The neurobiological basis of musical expectations. In The Oxford handbook of music psychology (pp. 171–183).
25. Harris P., Johnson C. N., Hutton D., Andrews G., & Cooke T. (1989). Young children’s theory of mind and emotion. Cognition and Emotion, 3, 379–400.
26. Baron-Cohen S., & Wheelwright S. (2004). The Empathy Quotient: An investigation of adults with Asperger syndrome or high functioning autism, and normal sex differences. Journal of Autism and Developmental Disorders, 34(2), 163–175.
27. Egermann H., & McAdams S. (2013). Empathy and emotional contagion as a link between recognized and felt emotions in music listening. Music Perception: An Interdisciplinary Journal, 31(2), 139–156.
29. Rabinowitch T. C., Cross I., & Burnard P. (2013). Long-term musical group interaction has a positive influence on empathy in children. Psychology of Music, 41(4), 484–498.
31. Vuoskoski J. K., & Eerola T. (2012). Can sad music really make you sad? Indirect measures of affective states induced by music and autobiographical memories. Psychology of Aesthetics, Creativity, and the Arts, 6(3), 204.
32. Vuoskoski J. K., Thompson W. F., McIlwain D., & Eerola T. (2012). Who enjoys listening to sad music and why? Music Perception, 29(3), 311–317.
33. Baron-Cohen S., Wheelwright S., Lawson J., Griffith R., Ashwin C., Billington J., et al. (2005). Empathizing and systemizing in autism spectrum conditions. In Volkmar F., Klin A., & Paul R. (Eds.), Handbook of autism and pervasive developmental disorders: Vol. 1. Diagnosis, development, neurobiology, and behavior (3rd ed., pp. 628–639). Hoboken, NJ: John Wiley and Sons.
35. Baron-Cohen S., Richler J., Bisarya D., Gurunathan N., & Wheelwright S. (2003). The Systemizing Quotient: An investigation of adults with Asperger syndrome or high-functioning autism, and normal sex differences. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 358(1430), 361–374.
36. Greenberg D. M., Rentfrow P. J., & Baron-Cohen S. (in press). Can music increase empathy? Interpreting musical experience through the empathizing-systemizing (E-S) theory: Implications for autism. Empirical Musicology Review, 10(1), 79–94.
37. Kreutz G., Schubert E., & Mitchell L. A. (2008). Cognitive styles of music listening. Music Perception, 26(1), 57–73.
38. Baron-Cohen S. (2003). The essential difference: Men, women, and the extreme male brain. London: Penguin.
41. Wheelwright S., Baron-Cohen S., Goldenfeld N., Delaney J., Fine D., Smith R., et al. (2006). Predicting Autism Spectrum Quotient (AQ) from the Systemizing Quotient-Revised (SQ-R) and Empathy Quotient (EQ). Brain Research, 1079(1), 47–56.
42. Baron-Cohen S., Wheelwright S., Skinner R., Martin J., & Clubley E. (2001). The Autism-Spectrum Quotient (AQ): Evidence from Asperger syndrome/high-functioning autism, males and females, scientists and mathematicians. Journal of Autism and Developmental Disorders, 31(1), 5–17.
43. Goldenfeld N., Baron-Cohen S., & Wheelwright S. (2005). Empathizing and systemizing in males, females and autism. Clinical Neuropsychiatry, 2(6), 338–345.
44. Baron-Cohen S., Cassidy S., Auyeung B., Allison C., Achoukhi M., Robertson, et al. (2014). Attenuation of typical sex differences in 800 adults with autism vs. 3,900 controls. PLoS ONE, 9(7), e102251.
45. Auyeung B., Baron-Cohen S., Ashwin E., Knickmeyer R., Taylor K., & Hackett G. (2009). Fetal testosterone and autistic traits. British Journal of Psychology, 100(1), 1–22.
46. Baron-Cohen S., Lombardo M. V., Auyeung B., Ashwin E., Chakrabarti B., & Knickmeyer R. (2011). Why are autism spectrum conditions more prevalent in males? PLoS Biology, 9(6), e1001081.
47. Lai M. C., Lombardo M. V., Chakrabarti B., Ecker C., Sadek S. A., Wheelwright S. J., et al. (2012). Individual differences in brain structure underpin empathizing–systemizing cognitive styles in male adults. NeuroImage, 61(4), 1347–1354.
48. Egermann H., & McAdams S. (2013). Empathy and emotional contagion as a link between recognized and felt emotions in music listening. Music Perception: An Interdisciplinary Journal, 31(2), 139–156.
49. Rentfrow P. J., Goldberg L. R., & Levitin D. J. (2011). The structure of musical preferences: A five-factor model. Journal of Personality and Social Psychology, 100(6), 1139–1157.
50. Rentfrow P. J., Goldberg L. R., Stillwell D. J., Kosinski M., Gosling S. D., & Levitin D. J. (2012). The song remains the same: A replication and extension of the MUSIC model. Music Perception, 30(2), 161–185.