Why do we behave morally?


Selfless behavior and cooperation cannot be taken for granted. Mohammad Salahshour of the Max Planck Institute for Mathematics in the Sciences (now at the Max Planck Institute of Animal Behavior) has used a game theory-based approach to show why it can be worthwhile for individuals to set their self-interest aside.

One of the most fundamental questions facing humanity is: why do we behave morally? After all, it is by no means self-evident that under certain circumstances we set our self-interest aside and put ourselves in the service of a group – sometimes to the point of self-sacrifice. Many theories have been developed to get to the bottom of this moral conundrum.

There are two well-known proposed solutions: that individuals help their relatives so that the common genes survive (kin selection), and that the principle of “you scratch my back and I’ll scratch yours” applies. If people help each other, everyone benefits in the end (principle of reciprocity).

Prisoner’s dilemma combined with a coordination game
Mathematician Mohammad Salahshour of the Max Planck Institute for Mathematics in the Sciences in Leipzig, Germany, has used the tools of game theory to explain the emergence of moral norms – because game theory studies how people make rational decisions in conflict situations.

For Salahshour, the question at the outset was: why do moral norms exist in the first place? And why do we have different, or even contrasting moral norms?

For example, while some norms, such as “help others”, promote self-sacrificing behaviour, others, such as dress codes, appear to have little to do with curbing selfishness.

To answer these questions, Salahshour coupled two games: first, the classic prisoner’s dilemma, in which two players must each decide whether to cooperate for a modest reward or betray the other for a much larger one. This game is a typical example of a social dilemma, in which the success of the group as a whole requires individuals to behave selflessly.

In this game, everybody loses out if too many members of a group behave selfishly, compared with a scenario in which everybody acts altruistically. However, if only a few individuals behave selfishly, they can secure a better outcome than their altruistic team members.
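The tension described above can be made concrete with standard prisoner’s dilemma payoffs. The numbers below are conventional illustrative values, not figures from Salahshour’s paper: a minimal sketch in which defection is always the better individual choice, yet mutual defection leaves everyone worse off than mutual cooperation.

```python
# Illustrative prisoner's dilemma payoffs (assumed values, T > R > P > S):
# T = temptation to defect, R = mutual cooperation,
# P = mutual defection, S = "sucker's" payoff for a betrayed cooperator.
T, R, P, S = 5, 3, 1, 0

# payoff[(my_move, their_move)] with moves "C" (cooperate) and "D" (defect)
payoff = {
    ("C", "C"): R, ("C", "D"): S,
    ("D", "C"): T, ("D", "D"): P,
}

# Defecting strictly dominates cooperating for each individual...
assert payoff[("D", "C")] > payoff[("C", "C")]  # 5 > 3
assert payoff[("D", "D")] > payoff[("C", "D")]  # 1 > 0
# ...yet everyone is worse off if all defect than if all cooperate:
assert payoff[("D", "D")] < payoff[("C", "C")]  # 1 < 3
```

Any payoff ordering T > R > P > S produces the same dilemma; the specific numbers only make the comparisons easy to check.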

Second, a game that focuses on typical decisions within groups, such as a coordination task, the distribution of resources, the choice of a leader, or conflict resolution. Many of these problems can ultimately be categorized as coordination or anticoordination problems.

Considered in isolation, the prisoner’s dilemma makes clear that cooperation does not pay off: from the individual’s perspective, self-interested behaviour is the best choice as long as enough others act selflessly. But individuals who act selfishly are unable to solve coordination problems efficiently and lose substantial resources by failing to coordinate their activity.

The situation can be completely different when the results of the two games are considered as a whole and there are moral norms at work which favour cooperation: now cooperation in the prisoner’s dilemma can suddenly pay off because the gain in the second game more than compensates for the loss in the first game.
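A toy calculation can show how this compensation works. This is a deliberately simplified sketch, not Salahshour’s full evolutionary model: it assumes a moral norm ties a player’s coordination behaviour to their prisoner’s dilemma strategy, so that two cooperators also coordinate successfully and split a bonus B, while anyone who defects forgoes it. The payoff values and the bonus are assumptions for illustration.

```python
# Assumed prisoner's dilemma payoffs (same ordering T > R > P > S as before)
T, R, P, S = 5, 3, 1, 0
B = 4  # assumed coordination bonus when both partners follow the norm

def total_payoff(me, other):
    """Combined payoff from the coupled games for one round."""
    pd = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(me, other)]
    # In this sketch, only pairs of norm-following cooperators coordinate.
    bonus = B if me == "C" and other == "C" else 0
    return pd + bonus

# With the games coupled, cooperating against a cooperator (3 + 4 = 7)
# now beats defecting against a cooperator (5 + 0 = 5):
assert total_payoff("C", "C") > total_payoff("D", "C")
```

Whenever the coordination bonus exceeds the temptation gap (B > T − R), the loss a cooperator accepts in the dilemma is more than repaid in the coordination game, which is the mechanism the paragraph above describes.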

Out of self-interest to coordination and cooperation
As a result of this process, not only does cooperative behaviour emerge, but also a social order. All individuals benefit from it – and for this reason, moral behaviour pays off for them.

“In my evolutionary model, there were no selfless behaviours at the beginning, but more and more moral norms emerged as a result of the coupling of the two games,” Salahshour reports.

“Then I observed a sudden transition to a system where there is a lot of cooperation.”

In this “moral state”, a set of coordination norms evolves that helps individuals to better coordinate their activity, and it is precisely through this that social norms and moral standards can emerge.

These coordination norms, in turn, favour cooperation: cooperating turns out to be a rewarding behaviour for the individual as well.

Mohammad Salahshour: “A moral system behaves like a Trojan horse: once established out of the individuals’ self-interest to promote order and organization, it also brings self-sacrificing cooperation.”

Through his work, Salahshour hopes to better understand social systems. “This can help improve people’s lives in the future,” he explains.

“But you can also use my game-theoretic approach to explain the emergence of social norms in social media. There, people exchange information and make strategic decisions at the same time – for example, who to support or what cause to support.”

Again, he said, two dynamics are at work at once: the exchange of information and the emergence of cooperative strategies. Their interplay is not yet well understood – but perhaps game theory will soon shed new light on this topical issue as well.

Social Anchoring of Right and Wrong
The first principle refers to the social implications of judgments about right and wrong. This has been emphasized as a defining characteristic of morality in different theoretical perspectives. For instance, Skitka (2010) and colleagues have convincingly argued that beliefs about what is morally right or wrong are unlike other attitudes or convictions (Mullen & Skitka, 2006; Skitka, Bauman, & Sargis, 2005; Skitka & Mullen, 2002). Instead, moral convictions are seen as compelling mandates, indicating what everyone “ought” to or “should” do. This has important social implications, as people also expect others to follow these behavioral guidelines. They are emotionally affected and distressed when this turns out not to be the case, find it difficult to tolerate or resolve such differences, and may even resort to violence against those who challenge their views (Skitka & Mullen, 2002).

This socially defined nature of moral guidelines is explicitly acknowledged in several theoretical perspectives on moral behavior. The Theory of Planned Behavior (e.g., Ajzen, 1991) offers a framework that clearly specifies how behavioral intentions are determined in an interplay of individual dispositions and social norms held by self-relevant others (Ajzen & Fishbein, 1974; Fishbein & Ajzen, 1974). For instance, research based on this perspective has been used to demonstrate that the adoption of moral behaviors, such as expressing care for the environment, can be enhanced when relevant others think this is important (Kaiser & Scheuthle, 2003).

In a similar vein, Haidt (2001) argued that judgments of what are morally good versus bad behaviors or character traits are specified in relation to culturally defined virtues. This allows shared ideas about right and wrong to vary, depending on the cultural, religious, or political context in which this is defined (Giner-Sorolla, 2012; Haidt & Graham, 2007; Haidt & Kesebir, 2010; Rai & Fiske, 2011). Haidt (2001) accordingly specifies that moral intuitions are developed through implicit learning of peer group norms and cultural socialization. This position is supported by empirical evidence showing how moral behavior plays out in groups (Graham, 2013; Graham & Haidt, 2010; Janoff-Bulman & Carnes, 2013). This work documents the different principles that (groups of) people use in their moral reasoning (Haidt, 2012). By connecting judgments about right and wrong to people’s group affiliations and social identities, this perspective clarifies why different religious, political, or social groups sometimes disagree on what is moral and find it difficult to understand the other position (Greene, 2013; Haidt & Graham, 2007).

We argue that all these notions point to the socially defined and identity-affirming properties of moral guidelines and moral behaviors. Conceptions of right and wrong reflect the values that people share with important others and are anchored in the social groups to which they (hope to) belong (Ellemers, 2017; Ellemers & Van den Bos, 2012; Ellemers & Van der Toorn, 2015; Leach, Bilali, & Pagliaro, 2015). This also implies that there is no inherent moral value in specific actions or overt displays, for instance, of empathy or helping. Instead, the same behaviors can acquire different moral meanings, depending on the social context in which they are displayed and the relations between actors and targets involved in this context (Blasi, 1980; Gray, Young, & Waytz, 2012; Kagan, 2018; Reeder & Spores, 1983).

Thus, a first question to be answered when reviewing the empirical literature is whether and how the socially shared and identity-relevant nature of moral guidelines—central to key theoretical approaches—is addressed in the studies conducted to examine human morality.

Conceptions of the Moral Self
A second principle that is needed to understand human morality—and expands evolutionary and biological approaches—is rooted in the explicit self-awareness and autobiographical narratives that characterize human self-consciousness, and moral self-views in particular (Hofmann, Wisneski, Brandt, & Skitka, 2014).

Because of the far-reaching implications of moral failures, people are highly motivated to protect their self-views of being a moral person (Pagliaro, Ellemers, Barreto, & Di Cesare, 2016; Van Nunspeet, Derks, Ellemers, & Nieuwenhuis, 2015). They try to escape self-condemnation, even when they fail to live up to their own moral standards. Different strategies have been identified that allow individuals to disengage their self-views from morally questionable actions (Bandura, 1999; Bandura, Barbaranelli, Caprara, & Pastorelli, 1996; Mazar, Amir, & Ariely, 2008). The impact of moral lapses or moral transgressions on one’s self-image can be averted by redefining one’s behavior, averting responsibility for what happened, disregarding the impact on others, or excluding others from the right to moral treatment, to name just a few possibilities.

A key point to note here is that such attempts to protect moral self-views are not only driven by the external image people wish to portray toward others. Importantly, the conviction that one qualifies as a moral person also matters for internalized conceptions of the moral self (Aquino & Reed, 2002; Reed & Aquino, 2003). This can prompt people, for instance, to forget moral rules they did not adhere to (Shu & Gino, 2012), to fail to recall their moral transgressions (Mulder & Aquino, 2013; Tenbrunsel, Diekmann, Wade-Benzoni, & Bazerman, 2010), or to disregard others whose behavior seems morally superior (Jordan & Monin, 2008).

As a result, the strong desire to think of oneself as a moral person not only enhances people’s efforts to display moral behavior (Ellemers, 2018; Van Nunspeet, Ellemers, & Derks, 2015). Instead, sadly, it can also prompt individuals to engage in symbolic acts to distance themselves from moral transgressions (Zhong & Liljenquist, 2006) or even makes them relax their behavioral standards once they have demonstrated their moral intentions (Monin & Miller, 2001). Thus, tendencies for self-reflection, self-consistency, and self-justification are both affected by and guide moral behavior, prompting people to adjust their moral reasoning as well as their judgments of others and to endorse moral arguments and explanations that help justify their own past behavior and affirm their worldviews (Haidt, 2001).

A second important question to consider when reviewing the empirical literature on morality, thus, is whether and how studies take into account these self-reflective mechanisms in the development of people’s moral self-views. From a theoretical perspective, it is therefore relevant to examine antecedents and correlates of tendencies to engage in self-defensive and self-justifying responses. From an empirical perspective, it also implies that it is important to consider the possibility that people’s self-reported dispositions and stated intentions may not accurately indicate or predict the moral behavior they display.

The Interplay Between Thoughts and Experiences
A third principle that connects different theoretical perspectives on human morality is the realization that this involves deliberate thoughts and ideals about right and wrong, as well as behavioral realities and emotional experiences people have, for instance, when they consider that important moral guidelines are transgressed by themselves or by others. Traditionally, theoretical approaches in moral psychology were based on the philosophical reasoning that is also reflected in legal and political scholarship on morality. Here, the focus is on general moral principles, abstract ideals, and deliberate decisions that are derived from the consideration of formal rules and their implications (Kohlberg, 1971; Turiel, 2006). Over the years, this perspective has begun to shift, starting with the observation made by Blasi (1980, p. 1) that

Few would disagree that morality ultimately lies in action and that the study of moral development should use action as the final criterion. But also few would limit the moral phenomenon to objectively observable behavior. Moral action is seen, implicitly or explicitly, as complex, imbedded in a variety of feelings, questions, doubts, judgments, and decisions . . . . From this perspective, the study of the relations between moral cognition and moral action is of primary importance.

This perspective became more influential as a result of Haidt’s (2001) introduction of “moral intuition” as a relevant construct. Questions about what comes first, reasoning or intuition, have yielded evidence showing that both are possible (e.g., Feinberg, Willer, Antonenko, & John, 2012; Pizarro, Uhlmann, & Bloom, 2003; Saltzstein & Kasachkoff, 2004). That is, reasoning can inform and shape moral intuition (the classic philosophical notion), but intuitive behaviors can also be justified with post hoc reasoning (Haidt’s position). The important conclusion from this debate thus seems to be that it is the interplay between deliberate thinking and intuitive knowing that shapes moral guidelines (Haidt, 2001, 2003, 2004). This points to the importance of behavioral realities and emotional experiences to understand how people reflect on general principles and moral ideals.

A first way in which this has been addressed resonates with the evolutionary survival value of moral guidelines to help avoid illness and contamination as sources of physical harm. In this context, it has been argued and shown that nonverbal displays of disgust and physical distancing can emerge as unthinking embodied experiences to morally aversive situations that may subsequently invite individuals to reason why similar situations should be avoided in the future (Schnall, Haidt, Clore, & Jordan, 2008; Tapp & Occhipinti, 2016). The social origins of moral guidelines are acknowledged in approaches explaining the role of distress and empathy as implicit cues that can prompt individuals to decide which others are worthy of prosocial behavior (Eisenberg, 2000). In a similar vein, the experience of moral anger and outrage at others who violate important guidelines is seen as indicating which guidelines are morally “sacred” (Tetlock, 2003). Experiences of disgust, empathy, and outrage all indicate relatively basic affective states that are marked with nonverbal displays and have direct implications for subsequent actions (Ekman, 1989; Ekman, 1992).

In addition, theoretical developments in moral psychology have identified the experience of guilt and shame as characteristic “moral” emotions. Compared with “primary” affective responses, these “secondary” emotions are used to indicate more complex, self-conscious states that are not immediately visible in nonverbal displays (Tangney & Dearing, 2002; Tangney, Stuewig, & Mashek, 2007). These moral emotions are seen to distinguish humans from most animals. Indeed, affording to others the perceived ability to experience such emotions communicates the degree to which we consider them to be human and worthy of moral treatment (Haslam & Loughnan, 2014). The nature of guilt and shame as “self-condemning” moral emotions indicates their function to inform self-views and guide behavioral adaptations rather than communicating one’s state to others.

At the same time, it has been noted that feelings of guilt and shame can be so overwhelming that they raise self-defensive responses that stand in the way of behavioral improvement (Giner-Sorolla, 2012). This can occur at the individual level as well as the group level, where the experience of “collective guilt” has been found to prevent intergroup reconciliation attempts (Branscombe & Doosje, 2004). Accordingly, it has been noted that the relations between the experience of guilt and shame as moral emotions and their behavioral implications depend very much on further appraisals relating to the likelihood of social rejection and self-improvement that guide self-forgiveness (Leach, 2017).

Regardless of which emotions they focus on, these theoretical perspectives all emphasize that moral concerns and moral decisions arise from situational realities, characterized by people’s experiences and the (moral) emotions these evoke. A third question emerging from theoretical accounts aiming to understand human morality, therefore, is whether and how the interplay between the thoughts people have about moral ideals (captured in principles, judgments, reasoning), on one hand, and the realities they experience (embodied behaviors, emotions), on the other, is explicitly addressed in empirical studies.

Understanding Moral Behavior
Our conclusion so far is that researchers in social psychology have displayed a considerable interest in examining topics relating to morality. However, it is not self-evident how the multitude of research topics and issues that are addressed in this literature can be organized. This is why we set out to organize the available research in this area into a limited set of meaningful categories by content-analyzing the publications we found to identify studies examining similar research questions. In the “Method” section, we provide a detailed explanation of the procedure and criteria we used to develop our coding scheme and to classify studies as relating to one of five research themes we extracted in this way. We now consider the nature of the research questions addressed within each of these themes and the rationales typically provided to study them, to specify how different research questions that are examined are seen to relate to each other. We visualize these hypothesized relations in Figure 1.

Figure 1.
The psychology of morality: connections between five research themes.

Researchers in this literature commonly cite the ambition to predict, explain, and influence Moral Behavior as their focal guideline for having an interest in examining some aspect of morality (see also Ellemers, 2017). We therefore place research questions relating to this theme at the center of Figure 1. Questions about behavioral displays that convey the moral tendencies of individuals or groups fall under this research theme. These include research questions that address implicit indicators of moral preferences or cooperative choices, as well as more deliberate displays of helping, cheating, or standing up for one’s principles.

Many researchers claim to address the likely antecedents of such moral behaviors that are located in the individual as well as in the (social) environment. Here, we include research questions relating to Moral Reasoning, which can reflect the application of abstract moral principles as well as specific life experiences or religious and political identities that people use to locate themselves in the world (e.g., Cushman, 2013). This work addresses moral standards people can adhere to, for instance, in the decision guidelines they adopt or in the way they respond to moral dilemmas or evaluate specific scenarios.

We classify research questions as referring to Moral Judgments when these address the dispositions and behaviors of other individuals, groups, or companies in terms of their morality. These are considered as relevant indicators of the reasons why and conditions under which people are likely to display moral behavior. Research questions addressed under this theme consider the characteristics and actions of other individuals and groups as examples of behavior to be followed or avoided or as a source of information to extract social norms and guidelines for one’s own behavior (e.g., Weiner, Osborne, & Rudolph, 2011).

We distinguish between these two clusters to be able to separate questions addressing the process of moral reasoning (to infer relevant decision rules) from questions relating to the outcome in the form of moral judgments (of the actions and character of others). However, the connecting arrow in Figure 1 indicates that these two types of research questions are often discussed in relation to each other, in line with Haidt’s (2001) reasoning that these are interrelated mechanisms and that moral decision rules can prescribe how certain individuals should be judged, just as person judgments can determine which decision rules are relevant in interacting with them.

We proceed by considering research questions that relate to the psychological implications of moral behavior. The immediate affective implications of one’s behavior, and how this reveals one’s moral reasoning as well as one’s judgments of others, are addressed in questions relating to Moral Emotions (Sheikh, 2014). These are the emotional responses that are seen to characterize moral situations and are commonly used to diagnose the moral implications of different events. Questions we classified under this research theme typically address feelings of guilt and shame that people experience with regard to their own behavior, or outrage and disgust in response to the moral transgressions of others.

Finally, we consider research questions addressing self-reflective and self-justifying tendencies associated with moral behavior. Studies aiming to investigate the moral virtue people afford to themselves and the groups they belong to, and the mechanisms they use for moral self-protection, are relevant for Moral Self-Views. Under this research theme, we subsume research questions that address the mechanisms people use to maintain self-consistency and think of themselves as moral persons, even when they realize that their behavior is not in line with their moral principles (see also Bandura, 1999).

Even though research questions often consider moral emotions and moral self-views as outcomes of moral behaviors and theorize about the factors preceding these behaviors, this does not imply that emotions and self-views are seen as the final end-states in this process. Instead, many publications refer to these mechanisms of interest as being iterative and assume that prior behaviors, emotions, and self-views also define the feedback cycles that help shape and develop subsequent reasoning and judgments of (self-relevant) others, which are important for future behavior. The feedback arrows in Figure 1 indicate this.

Our main goal in specifying how different types of research questions can be organized according to their thematic focus in this way is to offer a structure that can help monitor and compare the empirical approaches that are typically used to advance existing insights into different areas of interest. The relations depicted in Figure 1 represent the reasoning commonly provided to motivate the interest in different types of research questions. The location of the different themes in this figure clarifies how these are commonly seen to connect to each other and visualizes the (sometimes implicit) assumptions made about the way findings from different studies might be combined and should lead to cumulative insights. In the sections that follow, we will examine the empirical approaches used to address each of these clusters of research questions to specify the ways in which results from different types of studies actually complement each other and to identify remaining gaps in the empirical literature.

A Functionalist Perspective
An important feature of our approach is that we do not delineate research questions in terms of the specific moral concerns, guidelines, principles, or behaviors they address. Instead, we take a functionalist perspective in considering which mechanisms relevant to people’s thoughts and experiences relating to morality are examined to draw together the empirical evidence that is available. For each of the research themes described above, we therefore consider the empirical approaches that have been taken by identifying the nature of relevant functions or mechanisms that have been examined. This will help document the evidence that is available to support the notion that morality matters for the way people think about themselves, interact with others, live and work together in groups, and relate to other groups in society. In considering the different functions morality may have, we distinguish between four levels at which mechanisms in social psychology are generally studied (see also Ellemers, 2017; Ellemers & Van den Bos, 2012).

Intrapersonal Mechanisms
All the ways in which people consider, think, and reason by themselves to determine what is morally right refer to intrapersonal mechanisms. Even if these considerations are elicited by social norms or reflect the behavior observed in others, it is important to assess the extent to which they emerge as guiding principles for individuals to be used in their further reasoning, for their judgments of the self and others, for their behavioral displays, or for the emotions they experience. Thus, such intrapersonal mechanisms are relevant for questions relating to each of the five research themes we examine.

Interpersonal Mechanisms
The way people relate to others, respond to their moral behaviors, and connect to them tap into interpersonal mechanisms. Again we note that such mechanisms are relevant for research questions in all five research themes, as relations with others can inform the way people reason about morality, the way they judge other individuals or groups, the way they behave, as well as the emotions they experience and the self-views they have.

Intragroup Mechanisms
The role of moral concerns in defining group norms, the tendency of individuals to conform to such norms, and their resulting inclusion versus exclusion from the group all indicate intragroup mechanisms relevant to morality. Considering how groups influence individuals is relevant for our understanding of the way people reason about morality and the way they judge others. It also helps us understand the moral behavior individuals are likely to display (for instance, in public vs. private situations), the emotions they experience in response to the transgression of specific moral rules by themselves or different others, and the self-views they develop about their morality.

Intergroup Mechanisms
The tendency for social groups to endorse specific moral guidelines as a way to define their distinct identity, disagreements between groups about the nature or implications of important values, or moral concerns that stem from conflicts between groups in society all refer to intergroup mechanisms relevant to morality. Here too, examination of such mechanisms is relevant to research questions in each of the five research themes we distinguish. These may inform the tendency to interpret the prescription to be “fair” differently, depending on the identity of the recipients of such fairness, which helps understand people’s moral reasoning and the way they judge the morality of others. Intergroup relations may also help understand the tendency to behave differently toward members of different groups, as well as the emotions and self-views relating to such behaviors.

In sum, we argue that each of these four levels of analysis offers potentially relevant approaches to understand the mechanisms that can shape people’s moral concerns and their judgments of others. Mechanisms at all four levels can also affect moral behavior and have important implications for the emotions people experience and the self-views they hold. Reviewing whether and how empirical research has addressed relevant mechanisms at these four levels thus offers a better understanding of how morality operates in the social regulation of individual behavior (see also Carnes, Lickel, & Janoff-Bulman, 2015; Ellemers, 2017; Janoff-Bulman & Carnes, 2013).

Reference link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6791030/

Original Research: Open access.
“Interaction between games give rise to the evolution of moral norms of cooperation” by Mohammad Salahshour. PLOS Computational Biology.

