A research team from the Department of Computer Science and Engineering and the Electronics-Inspired Interdisciplinary Research Institute at Toyohashi University of Technology has found that the relationship between attentional state and elicited emotion may differ between visual and auditory perception.
This result was obtained by measuring pupillary reactions, which are related to human emotions. It suggests that visual stimuli elicit emotions in all attentional states, whereas auditory stimuli elicit emotions only when attention is paid to the sounds, revealing a difference in how attentional state relates to emotion across the two senses.
In our daily lives, our emotions are often elicited by the information we receive from visual and auditory perception. As such, many studies up until now have investigated human emotional processing using emotional stimuli such as pictures and sounds.
However, it was not clear whether such emotional processing differed between visual and auditory perception.
Our research team asked participants to perform four tasks designed to induce different attentional states while emotionally arousing pictures and sounds were presented, in order to investigate how emotional responses differ between visual and auditory perception.
We also compared pupillary responses, obtained from eye-tracking measurements, as a physiological indicator of emotional response. Visual stimuli (pictures) elicited emotions during all tasks, whereas auditory stimuli (sounds) did so only during tasks in which attention was paid to the sounds.
These results suggest that there are differences in the relationship between attentional states and emotional responses to visual and auditory stimuli.
“Traditionally, subjective questionnaires have been the most common method for assessing emotional states. However, in this study, we wanted to extract emotional states while some kind of task was being performed.
We therefore focused on the pupillary response, which is attracting attention as a biological signal that reflects cognitive states. Although many studies have reported on attentional states during emotional arousal via visual and auditory perception, no previous study has compared these states across the senses; this is the first such attempt”, explains lead author Satoshi Nakakoga, a Ph.D. student.
In addition, Professor Tetsuto Minami, the leader of the research team, said, “We have more and more opportunities to encounter various visual media via smartphones and other devices, and to have emotions evoked by that visual and auditory information.
We will continue to investigate how sensory perception elicits emotions, including the effects of elicited emotions on human behavior.”
Based on these results, our research team points to the possibility of a new method of emotion regulation in which the emotional response elicited through one sense is promoted or suppressed by stimuli delivered through another sense. Ultimately, we hope to establish this method of emotion regulation to help treat psychiatric disorders such as panic and mood disorders.
Funding: The study was funded by the National Institutes of Health, the TUT Program in Personalized Health, the National Center for Research Resources, the National Center for Advancing Translational Sciences, the Howard Hughes Medical Institute, the W.M. Keck Foundation, and the George S. and Delores Doré Eccles Foundation.
Emotions are expressed and perceived in many different sensory domains, multimodally, with emotional information conveyed via faces, voices, odors, touch, and body posture or movement [2,3,4,5].
Our ability to infer the emotional state of others, identify the potential threat they pose, and act accordingly is crucial to social interaction. While more static information conveyed by a face, such as gender or race, can be extracted by visual information alone, more dynamic information, such as emotional state, is often conveyed by a combination of emotional faces and voices.
Many studies have examined emotional processing in a given sensory domain; yet few have considered faces and voices together, a more common experience, which can take advantage of multimodal processes that may allow for more optimal information processing.
Processing Emotion across the Senses
From very early on, we can make use of emotional information from multiple sources [6]. For example, infants are able to discriminate emotions by 4 months of age if exposed to stimuli in two different modalities (bimodal), but only by 5 months of age if exposed to auditory stimuli alone.
Likewise, infants are able to recognize emotions by about 5 months if stimuli are bimodal, but not until 7 months if exposed to visual stimuli alone. Starting around 5 months, infants make crossmodal matches between faces and voices [7,8], and by 6.5 months can also make use of body posture information in the absence of face cues [9].
Crossmodal matches also take into account the number of individual faces and voices, with infants, starting at 7 months, showing a looking preference for visual stimuli that match auditory stimuli in numerosity [10].
Combining behavioral and event-related potential (ERP) methods, Vogel and colleagues [8] examined the development of the “other-race bias”, the tendency to better discriminate identities of one's own race versus identities of a different race.
The authors described a perceptual narrowing effect in behavior and brain responses. They found no effect of race on crossmodal emotional matching and no race-modulated congruency effect in neuronal activity in five-month-olds, but found such effects in nine-month-olds, who could only distinguish faces of their own race.
Furthermore, seven-month-olds can discriminate between congruent (matching in emotional valence) and incongruent (non-matching in emotional valence) face/voice pairs, with a larger negative ERP response to incongruent versus congruent face/voice stimuli and a larger positive ERP response to congruent versus incongruent stimuli.
These studies in infants, measuring crossmodal matching of emotional stimuli and perceptual advantages in detecting and discriminating emotional information based on bimodal stimulus presentations and the congruency between stimuli, laid important groundwork for the processing of emotional information across the senses.
Studies in adults have been more focused on how emotional information in one sense might influence the judgement of emotional information in another sense. To go beyond crossmodal matching or changes in the detection or discrimination of bimodal versus unimodal emotional stimuli, adaptation has been used, mostly in adults, to quantify by how much emotional information in one modality, such as audition, can bias the processing of emotional information in another modality, such as vision.
Exposure to Emotion: Perceptual Changes
Adaptation, a powerful tool that has been deemed the psychophysicist’s electrode, has been used to reveal the space in which faces are represented. In adaptation, repeated exposure to a stimulus downregulates neuronal firing in response to that stimulus and can yield a perceptual change, a contrastive after-effect.
For example, repeated exposure to female faces can bias androgynous faces to appear more masculine. Previous work has shown that many features of a face can be adapted, such as gender, ethnicity, and even emotions (for a review, see Reference [12]).
Adapting to emotional information can bias perception, producing a contrastive after-effect within a sensory modality, either visual or auditory [13,14,15,16]. Repeated exposure to positive faces produces a bias to perceive neutral faces as angry, while repeated exposure to negative faces produces a bias to perceive neutral faces as happy.
Complementary biases are found when perceiving neutral sounds after exposure to emotional sounds. Furthermore, the representation of emotion has been shown to be supramodal, with repeated exposure to emotional information in one sensory modality transferring to yield a contrastive after-effect in another sensory modality never directly exposed [17,18,19].
Although emotional information can be adapted within and across the senses, faces and voices often occur simultaneously. Yet few studies have examined whether there is a perceptual advantage to presenting visual and auditory stimuli concurrently, and results have been inconclusive.
For example, de Gelder and Vroomen [20] found that an emotional voice, happy or sad, could bias perception of a simultaneously presented neutral face to match that of the voice. Similarly, Müller and colleagues [21] found that negative emotional sounds, e.g., screams, could bias perception of a simultaneously presented neutral face to appear more fearful, compared to neutral emotional sounds, e.g., yawns.
However, Fox and Barton [22] did not find biased facial perception from emotional sounds. In a related study, using an adaptation paradigm, Wang and colleagues [19] also found no benefit (no increased adaptation) when visual and auditory stimuli were presented together and matched in emotional valence (congruent) compared to when a unimodal visual stimulus was presented in isolation, suggesting that emotional auditory information carried little weight in biasing emotional visual information.
Some discrepancies in results across studies might arise from differences in experimental paradigms. For example, adaptation paradigms may not have been optimized to adapt to emotion per se.
Since adaptation effects are stronger when adapting to the same face versus different faces or for easily recognized face/voice pairs, prior studies have often used few exemplar faces and voices. However, if one wants to test for interactions between visual and auditory emotional information, providing many exemplars helps assure one is not adapting to the unique configuration of features of a given face or voice, but rather to emotion.
Furthermore, if only a few faces and voices are used and presented as unique pairs during adaptation, presentation of a single stimulus in one modality after adaptation might induce imagery of a stimulus in the other modality due to the associations formed during adaptation.
In such a scenario, learned associations induce imagery which then might appear as a strengthening of adaptation effects for stimuli across modalities. In order to promote adaptation to emotion rather than to unique configurations of features of a given face, and to prevent induced imagery of an associated stimulus in the other modality, the current study used 30 unique faces and 15 unique crowd sounds presented at random during adaptation.
Furthermore, we used crowd sounds, where multiple voices are presented at once, as another way to ensure that unique face/voice pairs did not get formed and to ensure adaptation is to emotion, and not to characteristics of a particular voice or a particular face/voice pair.
While many previous studies have used only a few exemplars of face/voice pairs, crowd stimuli can be more informative than single identities. For instance, the gaze of a group is more effective at directing attention than the gaze of an individual [23].
In situations where multiple stimuli are presented at once, it has been shown that participants extract the mean emotion of the stimuli without representing individual characteristics [24,25]. Thus, we expected not only that participants in the current study would efficiently extract the emotional information from multiple voices without representing characteristics of individual voices, but that this information from a crowd would be more informative than information from a single identity.
Exposure to Emotion: Cortisol Changes
Interestingly, repeated exposure to emotional information can alter not only perception but also mood: exposure to positive emotional content can induce a positive mood and bias perception of faces to be more positive, while induction of a negative mood can bias perception of faces to be more negative (reviewed in Reference [26]).
Furthermore, the initial mood of the participant can bias perception, such that a more positive mood at baseline can bias faces to be perceived as more positive [27,28].
In considering how emotional information might alter a physiological marker of the stress response, particularly for negative emotional exposure, we assessed cortisol levels. Cortisol excretion is the final product of hypothalamic–pituitary–adrenocortical (HPA) axis activation in response to stress [29].
Salivary cortisol levels have been used as a non-invasive biomarker of the stress response (e.g., [30,31]). Cortisol has also been linked to attention and arousal, with higher cortisol levels correlated with increased attention [32] and with enhanced emotional face processing [33].
Furthermore, changes in cortisol have been linked to induced negative emotional state such that negative mood induction is associated with elevated cortisol [34,35].
Although exposure to emotional information can alter stress levels, the relationship between changes in perception and changes in cortisol following emotional exposure is not well understood.
Of note, repeated exposure to emotional information yields opposite effects on perception and mood. While repeated exposure to negative facial emotion biases perception to be more positive (contrastive after-effect), it biases mood to be more negative.
It remains to be seen if changes in cortisol positively or negatively correlate with changes in perception. Furthermore, although many studies have investigated the effects of exposure to emotional faces, it is unclear how emotions conveyed by other senses, such as voices, may interact to bias perception and cortisol.
The current study utilized an adaptation paradigm to investigate perceptual shifts, cortisol shifts, and their correlation as a function of exposure to visual and/or auditory emotional information. Participants were exposed to angry faces with or without concurrent emotional sounds that matched (congruent) or did not match (incongruent) facial emotion.
We quantified post-adaptation perceptual changes, normalized to baseline perceptual biases, and post-adaptation cortisol changes during the same exposure, normalized to baseline cortisol biases, uniquely for each participant.
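The perceptual measure here is the point of subjective equality (PSE): the morph level judged equally often as happy or angry. As a rough illustration of how such a shift could be quantified, the sketch below fits a logistic psychometric function and compares baseline and post-adaptation PSEs. The stimulus levels, simulated data, and function choices are hypothetical; the authors fit their data with psignifit [42], not this code.

```python
# Hypothetical sketch of PSE-shift quantification, not the authors' pipeline.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Proportion of 'happy' responses vs. morph level (0 = angry, 1 = happy)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

def fit_pse(levels, p_happy):
    """Fit the logistic and return the point of subjective equality (p = 0.5)."""
    (pse, _slope), _cov = curve_fit(logistic, levels, p_happy, p0=[0.5, 10.0])
    return pse

levels = np.linspace(0.0, 1.0, 9)           # illustrative morph continuum
base = logistic(levels, 0.50, 12.0)         # baseline: neutral point at 0.50
post = logistic(levels, 0.42, 12.0)         # after angry adaptation: less
                                            # "happy" signal needed, PSE drops

# A positive shift indexes a positive perceptual bias after negative adaptation.
pse_shift = fit_pse(levels, base) - fit_pse(levels, post)
print(round(pse_shift, 2))  # 0.08
```

Normalizing each participant's post-adaptation PSE to their own baseline PSE, as above, is what lets shifts be compared across participants with different starting biases.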
In line with perceptual after-effects, we expected adaptation to negative emotional information would bias perception to be more positive, with stronger effects for congruent versus incongruent emotions.
We also assessed perceptual effects post-adaptation to only visual or only auditory emotional information. These conditions provide baseline measures by which to assess differences between congruent and incongruent conditions.
Namely, they can distinguish whether a congruent emotion enhanced, or an incongruent emotion suppressed, adaptation effects relative to a baseline measure within a single modality. Given the results of Wang and colleagues [19], we expected the weakest effects following adaptation to only auditory emotional stimuli and expected congruent effects to be stronger and incongruent effects to be weaker than a visual-only baseline.
In line with stress-induced changes in cortisol, we expected exposure to negative emotional information would decrease cortisol if in accord with perceptual effects but increase cortisol if in accord with mood effects.
We expected cortisol changes to be largest for congruent emotions and weakest for only auditory emotions. Given pilot data suggesting our negative emotional stimuli were not acutely threatening and not very effective at increasing cortisol, we expected differences in the relative decrease in cortisol across adaptation conditions.
We assessed the relative contributions of visual and auditory emotional information in biasing changes in perception and cortisol and the correlation between the strength of changes in perception and changes in cortisol.
Unlike previous work using unique face-voice pairings for only a few individual face identities, we used a wide range of facial identities and unassociated emotional crowd sounds to assess emotional processing.
We hypothesized that (1) the emotion perceived in a face would show a positive bias post-exposure to negative emotional information, in accord with contrastive perceptual after-effects, and that (2) such after-effects would vary based on whether emotion was conveyed by visual and/or auditory information and whether visual and auditory emotional valence matched.
Overall, we found exposure to negative emotions yielded positive perceptual biases (a positive shift in the point of subjective equality, PSE) in all but the auditory-only adaptation condition, which showed no effect.
In accord with our expectations and replicating previous literature, PSE shifts were weakest following only auditory emotional exposure. Contrary to our expectations, the magnitude of PSE shifts did not differ for congruent versus incongruent emotions nor for congruent versus visual only emotions.
The failure to find a benefit for congruent versus visual-only adaptation was also noted in a previous study using unique face-voice pairings, which found no benefit (no increased PSE shift) for congruent visual and auditory happy emotions versus visual-only happy emotions [19].
We hypothesized that cortisol would decrease after exposure to negative emotional information and that decreases would vary based on whether the emotion was conveyed by visual or auditory information and whether visual and auditory emotional valence matched. Overall, we found cortisol decreased after exposure to negative emotional information, but we found no significant differences as a function of adaptation condition.
Given the variability of perceptual and cortisol shifts across individuals, we also tested the correlation between the magnitude of perceptual shifts and cortisol shifts. Here, we found a significant negative correlation across participants, such that the stronger the positive bias in perceiving a face after exposure to negative emotional content the stronger the decrease in cortisol.
While perceptual shifts correlated with cortisol shifts, baseline cortisol levels did not correlate with baseline perceptual biases. Thus, underlying baseline differences could not account for the correlations we observed between shifts in perception and shifts in cortisol.
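The across-participant analysis is, at its core, a correlation between two per-participant difference scores. A minimal illustration with fabricated data, constructed only to mirror the reported direction of the effect (larger positive perceptual bias, larger cortisol drop); the sample size and values are not the study's:

```python
# Illustrative sketch of the across-participant correlation analysis.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 24                                      # hypothetical sample size
pse_shift = rng.normal(0.08, 0.04, n)       # per-participant perceptual shift
# Fabricated cortisol changes: more positive perceptual bias, larger decrease.
cortisol_change = -0.5 * pse_shift + rng.normal(0.0, 0.01, n)

r, p = pearsonr(pse_shift, cortisol_change)
print(r < 0, p < 0.05)                      # True True
```

Because both variables are within-participant change scores, baseline differences in PSE or cortisol level drop out of the measure, consistent with the observation that baselines did not correlate.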
Is Cortisol a Proxy for Stress, Arousal, or Attention?
Our results highlight that changes in cortisol may correlate with changes in perception: the more that exposure to angry emotions biases faces to be perceived as happy the more pronounced the decrease in cortisol. This is contrary to what one might expect if changes in cortisol correlated with changes in mood.
Thus, while repeated exposure to negative emotional content increases positive biases in perception, such exposure increases negative biases in mood. One would expect negative biases in mood to correlate with a less pronounced decrease in cortisol, or even an increase in cortisol.
Yet, we find that exposure to negative emotional content yielded larger, not smaller, decreases in cortisol such that the larger the decrease in cortisol the greater the increase in positive perceptual bias. Thus, in our paradigm and with our emotional stimuli, changes in cortisol correlate with changes in perception rather than changes in mood.
One might have expected changes in cortisol to serve as a proxy for changes in mood since some studies find a correlation between baseline mood and baseline cortisol levels. For example, some studies find that positive affect correlates with decreased cortisol and negative affect with increased cortisol levels [48,49].
Yet, while several studies find urinary and salivary cortisol levels are associated with anxiety and depression [50,51], or with negative state affect [34,48], other studies do not find an association between negative trait affect and cortisol level [51,52]. Taken together, these studies do not provide a clear picture of the relationship between positive or negative state affect and cortisol levels.
Of note, although cortisol has long been known as the stress hormone, it has also been referred to as a marker of attention and arousal. Van Honk and colleagues [32] found that baseline cortisol levels correlated with the probability of orienting away from threatening faces, suggesting that individuals with higher baseline cortisol levels had higher levels of arousal, or more engaged attention, and thus responded more quickly to threatening faces than individuals with lower cortisol levels.
More recently, Kagan [53] highlighted that the term “stress” is applied too broadly and should be reserved for describing reactions to an experience that directly threatens an organism. He describes cortisol as a marker for exploratory activity and responses to novel situations.
In response to Kagan [53], McEwen and McEwen [54] call for more investigations into epigenetic factors that might underlie cortisol responses to positive and negative events, to help clarify distinctions between “good stress”, “tolerable stress”, and “toxic stress”. They emphasize that responses to stressors are highly individualized and that early life stressors may result in a reduced ability to cope in certain stressful situations.
1. Robins D.L., Hunyadi E., Schultz R.T. Superior temporal activation in response to dynamic audio-visual emotional cues. Brain Cogn. 2009;69:269–278. doi: 10.1016/j.bandc.2008.08.007.
2. Bestelmeyer P.E.G., Maurage P., Rouger J., Latinus M., Belin P. Adaptation to Vocal Expressions Reveals Multistep Perception of Auditory Emotion. J. Neurosci. 2014;34:8098–8105. doi: 10.1523/JNEUROSCI.4820-13.2014.
3. Herz R.S., McCall C., Cahill L. Hemispheric Lateralization in the Processing of Odor Pleasantness versus Odor Names. Chem. Sens. 1999;24:691–695. doi: 10.1093/chemse/24.6.691.
4. Niedenthal P.M. Embodying Emotion. Science. 2007;316:1002–1005. doi: 10.1126/science.1136930.
5. Zald D.H., Pardo J.V. Emotion, olfaction, and the human amygdala: Amygdala activation during aversive olfactory stimulation. Proc. Natl. Acad. Sci. USA. 1997;94:4119–4124. doi: 10.1073/pnas.94.8.4119.
6. Flom R., Bahrick L.E. The Development of Infant Discrimination of Affect in Multimodal and Unimodal Stimulation: The Role of Intersensory Redundancy. Dev. Psychol. 2007;43:238–252. doi: 10.1037/0012-16126.96.36.199.
7. Grossmann T., Striano T., Friederici A.D. Crossmodal integration of emotional information from face and voice in the infant brain. Dev. Sci. 2006;9:309–315. doi: 10.1111/j.1467-7687.2006.00494.x.
8. Vogel M., Monesson A., Scott L.S. Building biases in infancy: The influence of race on face and voice emotion matching. Dev. Sci. 2012;15:359–372. doi: 10.1111/j.1467-7687.2012.01138.x.
9. Zieber N., Kangas A., Hock A., Bhatt R.S. The development of intermodal emotion perception from bodies and voices. J. Exp. Child Psychol. 2014;126:68–79. doi: 10.1016/j.jecp.2014.03.005.
10. Jordan K.E., Brannon E.M. The multisensory representation of number in infancy. Proc. Natl. Acad. Sci. USA. 2006;103:3486–3489. doi: 10.1073/pnas.0508107103.
11. Little A.C., Feinberg D.R., DeBruine L.M., Jones B.C. Adaptation to Faces and Voices: Unimodal, Cross-Modal, and Sex-Specific Effects. Psychol. Sci. 2013;24:2297–2305. doi: 10.1177/0956797613493293.
12. Webster M.A., MacLeod D.I.A. Visual adaptation and face perception. Philos. Trans. R. Soc. B Biol. Sci. 2011;366:1702–1725. doi: 10.1098/rstb.2010.0360.
13. Bestelmeyer P.E., Rouger J., Debruine L.M., Belin P. Auditory adaptation in vocal affect perception. Cognition. 2010;117:217–223. doi: 10.1016/j.cognition.2010.08.008.
14. Hsu S., Young A.W., Stolz J.A., Besner D., Carr T.H. Adaptation effects in facial expression recognition. Vis. Cogn. 2004;12:284–336. doi: 10.1080/13506280444000030.
15. Rutherford M.D., Chattha H.M., Krysko K.M. The use of aftereffects in the study of relationships among emotion categories. J. Exp. Psychol. Hum. Percept. Perform. 2008;34:27–40. doi: 10.1037/0096-15188.8.131.52.
16. Webster M.A., Kaping D., Mizokami Y., Duhamel P. Adaptation to natural facial categories. Nature. 2004;428:557–561. doi: 10.1038/nature02420.
17. Pye A., Bestelmeyer P.E. Evidence for a supra-modal representation of emotion from cross-modal adaptation. Cognition. 2015;134:245–251. doi: 10.1016/j.cognition.2014.11.001.
18. Skuk V.G., Schweinberger S.R. Adaptation Aftereffects in Vocal Emotion Perception Elicited by Expressive Faces and Voices. PLoS ONE. 2013;8:e81691. doi: 10.1371/journal.pone.0081691.
19. Wang X., Guo X., Chen L., Liu Y., Goldberg M.E., Xu H. Auditory to Visual Cross-Modal Adaptation for Emotion: Psychophysical and Neural Correlates. Cereb. Cortex. 2016;27:1337–1346. doi: 10.1093/cercor/bhv321.
20. De Gelder B., Vroomen J. The perception of emotions by ear and by eye. Cogn. Emot. 2000;14:289–311. doi: 10.1080/026999300378824.
21. Müller V.I., Habel U., Derntl B., Schneider F., Zilles K., Turetsky B.I., Eickhoff S.B. Incongruence effects in crossmodal emotional integration. NeuroImage. 2011;54:2257–2266. doi: 10.1016/j.neuroimage.2010.10.047.
22. Fox C.J., Barton J.J. What is adapted in face adaptation? The neural representations of expression in the human visual system. Brain Res. 2007;1127:80–89. doi: 10.1016/j.brainres.2006.09.104.
23. Gallup A.C., Hale J.J., Sumpter D.J.T., Garnier S., Kacelnik A., Krebs J.R., Couzin I.D. Visual attention and the acquisition of information in human crowds. Proc. Natl. Acad. Sci. USA. 2012;109:7245–7250. doi: 10.1073/pnas.1116141109.
24. Haberman J., Harp T., Whitney D. Averaging facial expression over time. J. Vis. 2009;9:1–13. doi: 10.1167/9.11.1.
25. Haberman J., Whitney D. Rapid extraction of mean emotion and gender from sets of faces. Curr. Biol. 2007;17:R751–R753. doi: 10.1016/j.cub.2007.06.039.
26. Westermann R., Spies K., Stahl G., Hesse F.W. Relative effectiveness and validity of mood induction procedures: A meta-analysis. Eur. J. Soc. Psychol. 1996;26:557–580. doi: 10.1002/(SICI)1099-0992(199607)26:4<557::AID-EJSP769>3.0.CO;2-4.
27. Harris D.A., Hayes-Skelton S.A., Ciaramitaro V.M. What’s in a Face? How Face Gender and Current Affect Influence Perceived Emotion. Front. Psychol. 2016;7:9. doi: 10.3389/fpsyg.2016.01468.
28. Jackson M.C., Arlegui-Prieto M. Variation in normal mood state influences sensitivity to dynamic changes in emotional expression. Emotion. 2016;16:145–149. doi: 10.1037/emo0000126.
29. Pruessner J., Wolf O., Hellhammer D., Buske-Kirschbaum A., Von Auer K., Jobst S., Kaspers F., Kirschbaum C. Free Cortisol Levels after Awakening: A Reliable Biological Marker for the Assessment of Adrenocortical Activity. Life Sci. 1997;61:2539–2549. doi: 10.1016/S0024-3205(97)01008-4.
30. Kalin N.H., Larson C., Shelton S.E., Davidson R.J. Asymmetric frontal brain activity, cortisol, and behavior associated with fearful temperament in rhesus monkeys. Behav. Neurosci. 1998;112:286–292. doi: 10.1037/0735-7044.112.2.286.
31. Kirschbaum C., Hellhammer D. Response variability of salivary cortisol under psychological stimulation. J. Clin. Chem. Clin. Biochem. 1989;27:237.
32. Van Honk J., Tuiten A., Hout M.V.D., Koppeschaar H., Thijssen J., De Haan E., Verbaten R. Baseline salivary cortisol levels and preconscious selective attention for threat. A pilot study. Psychoneuroendocrinology. 1998;23:741–747. doi: 10.1016/S0306-4530(98)00047-X.
33. Van Peer J.M., Spinhoven P., Van Dijk J.G., Roelofs K. Cortisol-induced enhancement of emotional face processing in social phobia depends on symptom severity and motivational context. Biol. Psychol. 2009;81:123–130. doi: 10.1016/j.biopsycho.2009.03.006.
34. Buchanan T.W., Al’Absi M., Lovallo W.R. Cortisol fluctuates with increases and decreases in negative affect. Psychoneuroendocrinology. 1999;24:227–241. doi: 10.1016/S0306-4530(98)00078-X.
35. Gadea M., Gomez C., González-Bono E., Espert R., Salvador A. Increased cortisol and decreased right ear advantage (REA) in dichotic listening following a negative mood induction. Psychoneuroendocrinology. 2005;30:129–138. doi: 10.1016/j.psyneuen.2004.06.005.
36. Watson D., Clark L.A., Tellegen A. Development and validation of brief measures of positive and negative affect: The PANAS scales. J. Pers. Soc. Psychol. 1988;54:1063–1070. doi: 10.1037/0022-35184.108.40.2063.
37. Brainard D.H. The Psychophysics Toolbox. Spat. Vis. 1997;10:433–436. doi: 10.1163/156856897X00357.
38. Pelli D.G. The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spat. Vis. 1997;10:437–442. doi: 10.1163/156856897X00366.
39. Kleiner M., Brainard D., Pelli D., Ingling A., Murray R., Broussard C. What’s new in Psychtoolbox-3? Perception. 2007;36(ECVP Abstract Supplement):1–16.
40. Tottenham N., Tanaka J.W., Leon A.C., McCarry T., Nurse M., Hare T.A., Marcus D.J., Westerlund A., Casey B., Nelson C. The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Res. 2009;168:242–249. doi: 10.1016/j.psychres.2008.05.006.
41. Harris D.A., Ciaramitaro V.M. Interdependent Mechanisms for Processing Gender and Emotion: The Special Status of Angry Male Faces. Front. Psychol. 2016;7:836. doi: 10.3389/fpsyg.2016.01046.
42. Fründ I., Hänel V., Wichmann F. Psignifit. [(accessed on 1 September 2017)]; Available online: https://uni-tuebingen.de/en/faculties/faculty-of-science/departments/computer-science/lehrstuehle/neural-information-processing/research/resources/software/psignifit.
43. Wichmann F.A., Hill N.J. The psychometric function: I. Fitting, sampling, and goodness of fit. Percept. Psychophys. 2001;63:1293–1313. doi: 10.3758/BF03194544.
44. Mumenthaler C., Roesch E.B., Sander D., Kerzel D., Scherer K.R. Psychophysics of emotion: The QUEST for Emotional Attention. J. Vis. 2010;10:1–9.
45. Kirschbaum C., Hellhammer D.H. In: Encyclopedia of Stress. Fink G., editor. Volume 3. Academic Press; New York, NY, USA: 2000. pp. 379–383.
46. Kudielka B.M., Hellhammer D., Wüst S. Why do we respond so differently? Reviewing determinants of human salivary cortisol responses to challenge. Psychoneuroendocrinology. 2009;34:2–18. doi: 10.1016/j.psyneuen.2008.10.004.
47. Lovallo W.R., Farag N.H., Vincent A.S., Thomas T.L., Wilson M.F. Cortisol responses to mental stress, exercise, and meals following caffeine intake in men and women. Pharmacol. Biochem. Behav. 2006;83:441–447. doi: 10.1016/j.pbb.2006.03.005.
48. Smyth J., Ockenfels M.C., Porter L., Kirschbaum C., Hellhammer D.H., Stone A.A. Stressors and mood measured on a momentary basis are associated with salivary cortisol secretion. Psychoneuroendocrinology. 1998;23:353–370. doi: 10.1016/S0306-4530(98)00008-0.
49. Steptoe A., Wardle J., Marmot M. Positive affect and health-related neuroendocrine, cardiovascular, and inflammatory processes. Proc. Natl. Acad. Sci. USA. 2005;102:6508–6512. doi: 10.1073/pnas.0409174102.
50. Schaeffer M.A., Baum A. Adrenal Cortical Response to Stress at Three Mile Island. Psychosom. Med. 1984;46:227–237. doi: 10.1097/00006842-198405000-00005.
51. Van Eck M., Berkhof H., Nicolson N., Sulon J. The Effects of Perceived Stress, Traits, Mood States, and Stressful Daily Events on Salivary Cortisol. Psychosom. Med. 1996;58:447–458. doi: 10.1097/00006842-199609000-00007.
52. Clark L., Iversen S., Goodwin G. The influence of positive and negative mood states on risk taking, verbal fluency, and salivary cortisol. J. Affect. Disord. 2001;63:179–187. doi: 10.1016/S0165-0327(00)00183-X.
53. Kagan J. An overly permissive extension. Perspect. Psychol. Sci. 2016;11:442–450. doi: 10.1177/1745691616635593.
54. McEwen B.S., McEwen C.A. Response to Jerome Kagan’s Essay on Stress. Perspect. Psychol. Sci. 2016;11:451–455. doi: 10.1177/1745691616646635.