Psychological research shows that people tend to attribute successes to their internal abilities, while they blame failures on external circumstances such as unfair processes. Previous experimental studies have found that egotistical and immoral behavior increases when processes have been unfair.
The study from Linköping University shows that the same tendencies can emerge in competitive processes where none of the participants have had any advantage or disadvantage.
“When we fail, we overestimate how unfair the situation has been.
This increases the risk that we become more egotistical and immoral.
For instance, it can result in employees starting to trash talk their colleagues in a recruitment process”, says Kajsa Hansson, doctoral student in economics at Linköping University’s Department of Management and Engineering (IEI) and JEDI Lab.
The researchers at Linköping University wanted to investigate whether selfish behavior can be affected by providing information about procedural fairness in a competitive situation.
The results have been published in the Journal of Economic Behavior and Organization.
They also investigated whether information about the procedural fairness of the competition affected how winners and losers divided money between themselves.
The results showed that the losers took less money from the winner if they received information that the competition was fair.
Thus, the information made them less selfish. Losers who were not given any information about the fairness of the competition overestimated how unfair it had been. However, the winners’ behavior was not affected at all by receiving information that the competition had been fair.
“We see that uncertainty about the fairness in a competitive situation makes people more selfish. But information and transparency can reduce this type of behavior.
The take-home message from this study is that if we want to create a more ethical and fair society, implementing fair processes is important – but informing people about this procedural fairness can be just as important”, says Kajsa Hansson.
In addition to Kajsa Hansson, two researchers from Linköping University, Emil Persson and Gustav Tinghög, as well as Shai Davidai from Columbia Business School, contributed to the study.
Social Anchoring of Right and Wrong
The first principle refers to the social implications of judgments about right and wrong. This has been emphasized as a defining characteristic of morality in different theoretical perspectives. For instance, Skitka (2010) and colleagues have convincingly argued that beliefs about what is morally right or wrong are unlike other attitudes or convictions (Mullen & Skitka, 2006; Skitka, Bauman, & Sargis, 2005; Skitka & Mullen, 2002).
Instead, moral convictions are seen as compelling mandates, indicating what everyone “ought” to or “should” do. This has important social implications, as people also expect others to follow these behavioral guidelines. They are emotionally affected and distressed when this turns out not to be the case, find it difficult to tolerate or resolve such differences, and may even resort to violence against those who challenge their views (Skitka & Mullen, 2002).
This socially defined nature of moral guidelines is explicitly acknowledged in several theoretical perspectives on moral behavior. The Theory of Planned Behavior (e.g., Ajzen, 1991) offers a framework that clearly specifies how behavioral intentions are determined in an interplay of individual dispositions and social norms held by self-relevant others (Ajzen & Fishbein, 1974; Fishbein & Ajzen, 1974).
For instance, research based on this perspective has demonstrated that the adoption of moral behaviors, such as expressing care for the environment, can be enhanced when relevant others think this is important (Kaiser & Scheuthle, 2003).
In a similar vein, Haidt (2001) argued that judgments of what are morally good versus bad behaviors or character traits are specified in relation to culturally defined virtues. This allows shared ideas about right and wrong to vary, depending on the cultural, religious, or political context in which this is defined (Giner-Sorolla, 2012; Haidt & Graham, 2007; Haidt & Kesebir, 2010; Rai & Fiske, 2011). Haidt (2001) accordingly specifies that moral intuitions are developed through implicit learning of peer group norms and cultural socialization.
This position is supported by empirical evidence showing how moral behavior plays out in groups (Graham, 2013; Graham & Haidt, 2010; Janoff-Bulman & Carnes, 2013). This work documents the different principles that (groups of) people use in their moral reasoning (Haidt, 2012). By connecting judgments about right and wrong to people’s group affiliations and social identities, this perspective clarifies why different religious, political, or social groups sometimes disagree on what is moral and find it difficult to understand the other position (Greene, 2013; Haidt & Graham, 2007).
We argue that all these notions point to the socially defined and identity-affirming properties of moral guidelines and moral behaviors. Conceptions of right and wrong reflect the values that people share with important others and are anchored in the social groups to which they (hope to) belong (Ellemers, 2017; Ellemers & Van den Bos, 2012; Ellemers & Van der Toorn, 2015; Leach, Bilali, & Pagliaro, 2015).
This also implies that there is no inherent moral value in specific actions or overt displays, for instance, of empathy or helping. Instead, the same behaviors can acquire different moral meanings, depending on the social context in which they are displayed and the relations between actors and targets involved in this context (Blasi, 1980; Gray, Young, & Waytz, 2012; Kagan, 2018; Reeder & Spores, 1983).
Thus, a first question to be answered when reviewing the empirical literature is whether and how the socially shared and identity-relevant nature of moral guidelines—central to key theoretical approaches—is addressed in the studies conducted to examine human morality.
Conceptions of the Moral Self
A second principle that is needed to understand human morality—and expands evolutionary and biological approaches—is rooted in the explicit self-awareness and autobiographical narratives that characterize human self-consciousness, and moral self-views in particular (Hofmann, Wisneski, Brandt, & Skitka, 2014).
Because of the far-reaching implications of moral failures, people are highly motivated to protect their self-views of being a moral person (Pagliaro, Ellemers, Barreto, & Di Cesare, 2016; Van Nunspeet, Derks, Ellemers, & Nieuwenhuis, 2015). They try to escape self-condemnation, even when they fail to live up to their own moral standards. Different strategies have been identified that allow individuals to disengage their self-views from morally questionable actions (Bandura, 1999; Bandura, Barbaranelli, Caprara, & Pastorelli, 1996; Mazar, Amir, & Ariely, 2008).
The impact of moral lapses or moral transgressions on one’s self-image can be averted by redefining one’s behavior, deflecting responsibility for what happened, disregarding the impact on others, or excluding others from the right to moral treatment, to name just a few possibilities.
A key point to note here is that such attempts to protect moral self-views are not only driven by the external image people wish to portray toward others. Importantly, the conviction that one qualifies as a moral person also matters for internalized conceptions of the moral self (Aquino & Reed, 2002; Reed & Aquino, 2003).
This can prompt people, for instance, to forget moral rules they did not adhere to (Shu & Gino, 2012), to fail to recall their moral transgressions (Mulder & Aquino, 2013; Tenbrunsel, Diekmann, Wade-Benzoni, & Bazerman, 2010), or to disregard others whose behavior seems morally superior (Jordan & Monin, 2008).
As a result, the strong desire to think of oneself as a moral person not only enhances people’s efforts to display moral behavior (Ellemers, 2018; Van Nunspeet, Ellemers, & Derks, 2015). Sadly, it can also prompt individuals to engage in symbolic acts to distance themselves from moral transgressions (Zhong & Liljenquist, 2006) or even lead them to relax their behavioral standards once they have demonstrated their moral intentions (Monin & Miller, 2001).
Thus, tendencies for self-reflection, self-consistency, and self-justification are both affected by and guide moral behavior, prompting people to adjust their moral reasoning as well as their judgments of others and to endorse moral arguments and explanations that help justify their own past behavior and affirm their worldviews (Haidt, 2001).
A second important question to consider when reviewing the empirical literature on morality is thus whether and how studies take into account these self-reflective mechanisms in the development of people’s moral self-views. From a theoretical perspective, it is therefore relevant to examine antecedents and correlates of tendencies to engage in self-defensive and self-justifying responses. From an empirical perspective, it also implies that it is important to consider the possibility that people’s self-reported dispositions and stated intentions may not accurately indicate or predict the moral behavior they display.
The Interplay Between Thoughts and Experiences
A third principle that connects different theoretical perspectives on human morality is the realization that morality involves deliberate thoughts and ideals about right and wrong, as well as the behavioral realities and emotional experiences people have, for instance, when they consider that important moral guidelines are transgressed by themselves or by others.
Traditionally, theoretical approaches in moral psychology were based on the philosophical reasoning that is also reflected in legal and political scholarship on morality. Here, the focus is on general moral principles, abstract ideals, and deliberate decisions that are derived from the consideration of formal rules and their implications (Kohlberg, 1971; Turiel, 2006). Over the years, this perspective has begun to shift, starting with the observation made by Blasi (1980, p. 1) that
Few would disagree that morality ultimately lies in action and that the study of moral development should use action as the final criterion. But also few would limit the moral phenomenon to objectively observable behavior. Moral action is seen, implicitly or explicitly, as complex, imbedded in a variety of feelings, questions, doubts, judgments, and decisions . . . . From this perspective, the study of the relations between moral cognition and moral action is of primary importance.
This perspective became more influential as a result of Haidt’s (2001) introduction of “moral intuition” as a relevant construct. Questions about what comes first, reasoning or intuition, have yielded evidence showing that both are possible (e.g., Feinberg, Willer, Antonenko, & John, 2012; Pizarro, Uhlmann, & Bloom, 2003; Saltzstein & Kasachkoff, 2004).
That is, reasoning can inform and shape moral intuition (the classic philosophical notion), but intuitive behaviors can also be justified with post hoc reasoning (Haidt’s position). The important conclusion from this debate thus seems to be that it is the interplay between deliberate thinking and intuitive knowing that shapes moral guidelines (Haidt, 2001, 2003, 2004). This points to the importance of behavioral realities and emotional experiences to understand how people reflect on general principles and moral ideals.
A first way in which this has been addressed resonates with the evolutionary survival value of moral guidelines to help avoid illness and contamination as sources of physical harm. In this context, it has been argued and shown that nonverbal displays of disgust and physical distancing can emerge as unthinking, embodied responses to morally aversive situations that may subsequently invite individuals to reason why similar situations should be avoided in the future (Schnall, Haidt, Clore, & Jordan, 2008; Tapp & Occhipinti, 2016).
The social origins of moral guidelines are acknowledged in approaches explaining the role of distress and empathy as implicit cues that can prompt individuals to decide which others are worthy of prosocial behavior (Eisenberg, 2000). In a similar vein, the experience of moral anger and outrage at others who violate important guidelines is seen as indicating which guidelines are morally “sacred” (Tetlock, 2003).
Experiences of disgust, empathy, and outrage all indicate relatively basic affective states that are marked by nonverbal displays and have direct implications for subsequent actions (Ekman, 1989, 1992).
In addition, theoretical developments in moral psychology have identified the experience of guilt and shame as characteristic “moral” emotions. Compared with “primary” affective responses, these “secondary” emotions are used to indicate more complex, self-conscious states that are not immediately visible in nonverbal displays (Tangney & Dearing, 2002; Tangney, Stuewig, & Mashek, 2007).
These moral emotions are seen to distinguish humans from most animals. Indeed, affording to others the perceived ability to experience such emotions communicates the degree to which we consider them to be human and worthy of moral treatment (Haslam & Loughnan, 2014). The nature of guilt and shame as “self-condemning” moral emotions indicates their function to inform self-views and guide behavioral adaptations rather than communicating one’s state to others.
At the same time, it has been noted that feelings of guilt and shame can be so overwhelming that they raise self-defensive responses that stand in the way of behavioral improvement (Giner-Sorolla, 2012). This can occur at the individual level as well as the group level, where the experience of “collective guilt” has been found to prevent intergroup reconciliation attempts (Branscombe & Doosje, 2004).
Accordingly, it has been noted that the relations between the experience of guilt and shame as moral emotions and their behavioral implications depend very much on further appraisals relating to the likelihood of social rejection and self-improvement that guide self-forgiveness (Leach, 2017).
Regardless of which emotions they focus on, these theoretical perspectives all emphasize that moral concerns and moral decisions arise from situational realities, characterized by people’s experiences and the (moral) emotions these evoke. A third question emerging from theoretical accounts aiming to understand human morality, therefore, is whether and how the interplay between the thoughts people have about moral ideals (captured in principles, judgments, reasoning), on one hand, and the realities they experience (embodied behaviors, emotions), on the other, is explicitly addressed in empirical studies.
Understanding Moral Behavior
Our conclusion so far is that researchers in social psychology have displayed a considerable interest in examining topics relating to morality. However, it is not self-evident how the multitude of research topics and issues that are addressed in this literature can be organized. This is why we set out to organize the available research in this area into a limited set of meaningful categories by content-analyzing the publications we found to identify studies examining similar research questions.
In the “Method” section, we provide a detailed explanation of the procedure and criteria we used to develop our coding scheme and to classify studies as relating to one of five research themes we extracted in this way. We now consider the nature of the research questions addressed within each of these themes and the rationales typically provided to study them, to specify how different research questions that are examined are seen to relate to each other. We visualize these hypothesized relations in Figure 1.
Researchers in this literature commonly cite the ambition to predict, explain, and influence Moral Behavior as their focal guideline for having an interest in examining some aspect of morality (see also Ellemers, 2017). We therefore place research questions relating to this theme at the center of Figure 1. Questions about behavioral displays that convey the moral tendencies of individuals or groups fall under this research theme. These include research questions that address implicit indicators of moral preferences or cooperative choices, as well as more deliberate displays of helping, cheating, or standing up for one’s principles.
Many researchers claim to address the likely antecedents of such moral behaviors that are located in the individual as well as in the (social) environment. Here, we include research questions relating to Moral Reasoning, which can reflect the application of abstract moral principles as well as specific life experiences or religious and political identities that people use to locate themselves in the world (e.g., Cushman, 2013). This work addresses moral standards people can adhere to, for instance, in the decision guidelines they adopt or in the way they respond to moral dilemmas or evaluate specific scenarios.
We classify research questions as referring to Moral Judgments when these address the dispositions and behaviors of other individuals, groups, or companies in terms of their morality. These are considered relevant indicators of the reasons why and conditions under which people are likely to display moral behavior. Research questions addressed under this theme consider the characteristics and actions of other individuals and groups as examples of behavior to be followed or avoided or as a source of information to extract social norms and guidelines for one’s own behavior (e.g., Weiner, Osborne, & Rudolph, 2011).
We distinguish between these two clusters to be able to separate questions addressing the process of moral reasoning (to infer relevant decision rules) from questions relating to the outcome in the form of moral judgments (of the actions and character of others). However, the connecting arrow in Figure 1 indicates that these two types of research questions are often discussed in relation to each other, in line with Haidt’s (2001) reasoning that these are interrelated mechanisms and that moral decision rules can prescribe how certain individuals should be judged, just as person judgments can determine which decision rules are relevant in interacting with them.
We proceed by considering research questions that relate to the psychological implications of moral behavior. The immediate affective implications of one’s behavior, and how this reveals one’s moral reasoning as well as one’s judgments of others, are addressed in questions relating to Moral Emotions (Sheikh, 2014). These are the emotional responses that are seen to characterize moral situations and are commonly used to diagnose the moral implications of different events. Questions we classified under this research theme typically address feelings of guilt and shame that people experience with regard to their own behavior, or outrage and disgust in response to the moral transgressions of others.
Finally, we consider research questions addressing self-reflective and self-justifying tendencies associated with moral behavior. Studies aiming to investigate the moral virtue people afford to themselves and the groups they belong to, and the mechanisms they use for moral self-protection, are relevant for Moral Self-Views. Under this research theme, we subsume research questions that address the mechanisms people use to maintain self-consistency and think of themselves as moral persons, even when they realize that their behavior is not in line with their moral principles (see also Bandura, 1999).
Even though research questions often consider moral emotions and moral self-views as outcomes of moral behaviors and theorize about the factors preceding these behaviors, this does not imply that emotions and self-views are seen as the final end-states in this process.
Instead, many publications refer to these mechanisms of interest as being iterative and assume that prior behaviors, emotions, and self-views also define the feedback cycles that help shape and develop subsequent reasoning and judgments of (self-relevant) others, which are important for future behavior. The feedback arrows in Figure 1 indicate this.
Our main goal in specifying how different types of research questions can be organized according to their thematic focus in this way is to offer a structure that can help monitor and compare the empirical approaches that are typically used to advance existing insights into different areas of interest. The relations depicted in Figure 1 represent the reasoning commonly provided to motivate the interest in different types of research questions.
The location of the different themes in this figure clarifies how these are commonly seen to connect to each other and visualizes the (sometimes implicit) assumptions made about the way findings from different studies might be combined and should lead to cumulative insights. In the sections that follow, we will examine the empirical approaches used to address each of these clusters of research questions to specify the ways in which results from different types of studies actually complement each other and to identify remaining gaps in the empirical literature.
Reference link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6791030/
Original Research: Open access.
“Losing sense of fairness: How information about a level playing field reduces selfish behavior” by Kajsa Hansson, Emil Persson, Shai Davidai, and Gustav Tinghög. Journal of Economic Behavior and Organization.