Availability Heuristic: How brain shortcuts can fuel vaccine and mask non-compliance


If close friends and family members who contracted COVID-19 had mild cases and recovered quickly, or if they had an adverse reaction to the vaccine, your brain might convince you that you’d have the same experience.

This phenomenon, known as the “availability heuristic,” is one of a handful of cognitive shortcuts that conserve brain energy and are generally understood to be positive and beneficial. For example, an alternative route to work could save you time and fuel, or a mathematical shortcut could help you solve an equation more efficiently.

However, “these cognitive shortcuts can be deadly during a pandemic,” warn Theodore Beauchaine, the William K. Warren Foundation Professor of Psychology at the University of Notre Dame, and his co-authors.

In a recent paper in the journal Brain, Behavior, and Immunity, Beauchaine and his colleagues break down the cognitive shortcuts that can affect how we assess risk and decide how to behave in the face of the pandemic. A second shortcut is known as the “representativeness heuristic.”

When the brain relies on this cognitive shortcut, it might tell you only elderly people are at risk of contracting COVID-19, despite an abundance of empirical evidence to the contrary.

“We may ignore or fail to account for basic facts about SARS-CoV-2 and decide to engage with people who we believe are unlikely to be infected, even though we are all at risk of exposure and infection with this novel pathogen,” the researchers wrote.

Within this shortcut are two important subsets that can result in putting ourselves and others at risk. We may make erroneous assumptions via the “insensitivity to predictability” heuristic when, for example, we believe a friend who currently has COVID-19 but is only experiencing mild symptoms isn’t spreading the virus and won’t suffer long-term health consequences.

Throughout the pandemic, authorities in many communities have sought to limit social gatherings to slow the spread of the virus. When our brains use the “insensitivity to sample size” shortcut, we assume that the infection rate observed in a small gathering is indicative of the overall population infection rate, which it is not.

“In the context of infectious disease, small groups may deviate exponentially from the population infection rate given that members of small groups are non-random, often sharing social contacts and high-risk occupations,” Beauchaine and his colleagues wrote.
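To see why small-group rates are unreliable even in the best case, here is a minimal simulation sketch; the 5% population rate and the group sizes are illustrative assumptions, not figures from the paper. Purely random groups of ten already produce wildly varying infection rates, and the non-random, socially clustered groups the authors describe deviate even further.

```python
import random

random.seed(1)
POP_RATE = 0.05  # assumed population infection rate (illustrative only)

def observed_rates(group_size: int, trials: int = 10_000) -> list:
    """Simulate the infection rate observed in many independent random groups."""
    return [
        sum(random.random() < POP_RATE for _ in range(group_size)) / group_size
        for _ in range(trials)
    ]

for n in (10, 100, 1000):
    rates = observed_rates(n)
    print(f"group size {n:>4}: observed rates range "
          f"{min(rates):.2f} to {max(rates):.2f} around the true {POP_RATE:.2f}")
```

Even under this idealized random sampling, a ten-person gathering routinely shows 0% or 20% to 30% infected; inferring the population rate from such a group is exactly the “insensitivity to sample size” error.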

The “anchoring heuristic” refers to humans’ tendency to cling to initial information we receive about something, even when presented with updated information.

The authors give the example of people continuing to cite the surgeon general’s inaccurate statement early in the pandemic that masks were ineffective, despite subsequent studies demonstrating their effectiveness.

In the 1970s, studies conducted by Israeli psychologists Amos Tversky and Daniel Kahneman showed that everyone’s brains – even doctors and mental health professionals – take these mental shortcuts to preserve cognitive resources. They also found that extensive life experience can’t override – and might even accentuate – cognitive shortcuts.

“Education, awareness and further research on the role of heuristics in the spread of infectious disease should help to improve decision-making and reduce risky behavior during a pandemic. To make accurate risk assessments, engage in safe behaviors and stop the spread of COVID-19, we must account for heuristics and their influence on our perceptions and behaviors,” the authors concluded.


How are decisions made?

Three major answers have been proposed: the mind applies logic, statistics, or heuristics.

Yet these mental tools have not been treated as equals, each suited to a particular kind of problem, as we believe they should be. Rather, rules of logic and statistics have been linked to rational reasoning, and heuristics to error-prone intuitions or even irrationality.

Since the 1970s, this opposition has been entrenched in psychological research, from the heuristics-and-biases program (Tversky & Kahneman 1974) to various two-system theories of reasoning (Evans 2008). Deviations from logical or statistical principles became routinely interpreted as judgmental biases and attributed to cognitive heuristics such as “representativeness” or to an intuitive “System 1.”

The bottom line was that people often rely on heuristics, but they would be better off in terms of accuracy if they did not. As Kahneman (2003) explained in his Nobel Memorial Lecture: “Our research attempted to obtain a map of bounded rationality, by exploring the systematic biases that separate the beliefs that people have and the choices they make from the optimal beliefs and choices assumed in rational-agent models” (p. 1449).

In this research, it is assumed that the conditions for rational models hold and can thus define optimal reasoning. The “father” of bounded rationality, Simon (1989), however, asked a fundamentally different question, leading to a different research program.

Simon’s question: “How do human beings reason when the conditions for rationality postulated by the model of neoclassical economics are not met?” (p. 377)

As Simon (1979, p. 500) stressed in his Nobel Memorial Lecture, the classical model of rationality requires knowledge of all the relevant alternatives, their consequences and probabilities, and a predictable world without surprises. These conditions, however, are rarely met for the problems that individuals and organizations face.

Savage (1954), known as the founder of modern Bayesian decision theory, called such perfect knowledge small worlds, to be distinguished from large worlds. In large worlds, part of the relevant information is unknown or has to be estimated from small samples, so that the conditions for rational decision theory are not met, making it an inappropriate norm for optimal reasoning (Binmore 2009).

In a large world, as emphasized by both Savage and Simon, one can no longer assume that “rational” models automatically provide the correct answer. Even small deviations from the model conditions can matter. In fact, small-world theories can lead to disaster when applied to the large world, as Stiglitz (2010) noted with respect to the financial crash of 2008: “It simply wasn’t true that a world with almost perfect information was very similar to one in which there was perfect information” (p. 243, emphasis added).

And Soros (2009) concluded that “rational expectations theory is no longer taken seriously outside academic circles” (p. 6).

In recent years, research has moved beyond small worlds such as the ultimatum game and choice between monetary gambles. To test how well heuristics perform in large worlds, one needs formal models of heuristics.

Such tests are not possible as long as heuristics are only vaguely characterized by general labels, because labels cannot make the precise predictions that statistical techniques can.

When heuristics were formalized, a surprising discovery was made. In a number of large worlds, simple heuristics were more accurate than standard statistical methods that have the same or more information. These results became known as less-is-more effects: There is an inverse-U-shaped relation between level of accuracy and amount of information, computation, or time.

In other words, there is a point where more is not better, but harmful. Starting in the late 1990s, it was shown for the first time that relying on one good reason (and ignoring the rest) can lead to higher predictive accuracy than achieved by a linear multiple regression (Czerlinski et al. 1999, Gigerenzer & Goldstein 1996) and a three-layer feedforward connectionist network trained using the backpropagation algorithm (Brighton 2006, Chater et al. 2003, Gigerenzer & Brighton 2009). These results put heuristics on a par with standard statistical models of “rational” cognition (see Gigerenzer 2008). Simon (1999) spoke of a “revolution in cognitive science, striking a great blow for sanity in the approach to human rationality.”
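The following is a minimal sketch of that kind of comparison, not a reproduction of the cited studies’ design: in a made-up environment with one informative cue among several irrelevant ones, a rule that relies on the single best cue from a small training sample often predicts new cases better than ordinary least-squares regression over all cues. The data-generating process and all parameters are assumptions chosen only to illustrate the effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n: int, n_cues: int = 5, noise: float = 2.0):
    """Illustrative environment: one informative cue, the rest irrelevant."""
    X = rng.normal(size=(n, n_cues))
    y = 3.0 * X[:, 0] + rng.normal(scale=noise, size=n)
    return X, y

wins = 0
for _ in range(200):
    X_tr, y_tr = make_data(15)    # small training sample (a large-world constraint)
    X_te, y_te = make_data(500)   # held-out test sample

    # Multiple regression: estimate a weight for every cue plus an intercept.
    A = np.c_[X_tr, np.ones(len(X_tr))]
    beta, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    reg_mse = np.mean((np.c_[X_te, np.ones(len(X_te))] @ beta - y_te) ** 2)

    # One-good-reason rule: keep only the cue most correlated with y in training.
    best = int(np.argmax([abs(np.corrcoef(X_tr[:, j], y_tr)[0, 1])
                          for j in range(X_tr.shape[1])]))
    coef = np.polyfit(X_tr[:, best], y_tr, 1)
    heur_mse = np.mean((np.polyval(coef, X_te[:, best]) - y_te) ** 2)

    wins += heur_mse < reg_mse

print(f"one-reason rule beat full regression in {wins}/200 runs")
```

With small samples, the regression spends its data estimating weights for irrelevant cues and fits noise; ignoring those cues is what protects the simpler rule out of sample.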

The revolution Simon referred to could not have happened without formal models and the power of modern computers. Moreover, it is a “revolution” in the original sense of the term, building on earlier demonstrations of the robust beauty of simple models.

These include Dawes & Corrigan (1974) and Einhorn & Hogarth (1975), who showed that simple equal weights predict about as well as—and sometimes better than—multiple regression with “optimal” beta weights. Their important work has not received the recognition it deserves and is not even mentioned in standard textbooks in econometrics (Hogarth 2011).

Although the study of heuristics has been typically considered as purely descriptive, less-is-more effects open up a prescriptive role for heuristics, resulting in two research questions:

Description: Which heuristics do people use in which situations?

Prescription: When should people rely on a given heuristic rather than a complex strategy to make more accurate judgments?

WHAT IS A HEURISTIC?

The term heuristic is of Greek origin and means “serving to find out or discover.” Einstein included the term in the title of his Nobel Prize–winning paper from 1905 on quantum physics, indicating that the view he presented was incomplete but highly useful (Holton 1988, pp. 360–361). Max Wertheimer, who was a close friend of Einstein, and his fellow Gestalt psychologists spoke of heuristic methods such as “looking around” to guide search for information.

The mathematician George Polya distinguished heuristics from analytical methods: For instance, heuristics are needed to find a proof, whereas analysis is for checking a proof. Simon and Allen Newell, a student of Polya, developed formal models of heuristics to limit large search spaces. Luce (1956), Tversky (1972), Dawes (1979), and others studied models of heuristics, such as lexicographic rules, elimination-by-aspects, and equal-weight rules.

Payne and colleagues (1993) provided evidence for the adaptive use of these and other heuristics in their seminal research. Similarly, behavioral biologists studied experimentally the rules of thumb (their term for heuristics) that animals use for choosing food sites, nest sites, or mates (Hutchinson & Gigerenzer 2005).

After an initial phase dominated by logic, researchers in artificial intelligence (AI) began to study heuristics that can solve problems that logic and probability cannot, such as NP-complete (computationally intractable) problems. While AI researchers began to study how heuristics make computers smart, psychologists in the 1970s became interested in demonstrating human reasoning errors, and they used the term heuristic to explain why people make errors.

This change in the evaluation of heuristics went hand-in-hand with replacing models of heuristics by general labels, such as “availability” and, later, “affect.” Unlike in biology and AI, heuristics became tied to biases, whereas the content-free laws of logic and probability became identified with the principles of sound thinking (Kahneman 2003, Tversky & Kahneman 1974). The resulting heuristics-and-biases program has had immense influence, contributing to the emergence of behavioral economics and behavioral law and economics.

Definition

Many definitions of heuristics exist. Kahneman & Frederick (2002) proposed that a heuristic assesses a target attribute by another property (attribute substitution) that comes more readily to mind.

Shah & Oppenheimer (2008) proposed that all heuristics rely on effort reduction by one or more of the following: (a) examining fewer cues, (b) reducing the effort of retrieving cue values, (c) simplifying the weighting of cues, (d) integrating less information, and (e) examining fewer alternatives. Although both attribute substitution and effort reduction are involved, attribute substitution is less specific because most inference methods, including multiple regression, entail it: An unknown criterion is estimated by cues. For the purpose of this review, we adopt the following definition:

A heuristic is a strategy that ignores part of the information, with the goal of making decisions more quickly, frugally, and/or accurately than more complex methods.

Let us explain the terms. Heuristics are a subset of strategies; strategies also include complex regression or Bayesian models. The part of the information that is ignored is covered by Shah and Oppenheimer’s list of five aspects. The goal of making judgments more quickly and frugally is consistent with the goal of effort reduction, where “frugal” is often measured by the number of cues that a heuristic searches. Of course, there is no strict dichotomy between heuristic and nonheuristic, as strategies can ignore more or less information.

The goal of making judgments more accurately by ignoring information is new. It goes beyond the classical assumption that a heuristic trades off some accuracy for less effort. Unlike the two-system models of reasoning that link heuristics to unconscious, associative, and error-prone processes, no such link is made in this review.

Every heuristic reviewed in this article can also be relied upon consciously and is defined as a rule. The amount of error it generates can be measured and compared to other strategies.

WHY HEURISTICS?

Two answers have been proposed to the question of why heuristics are useful: the accuracy-effort trade-off, and the ecological rationality of heuristics.

Accuracy-Effort Trade-Off

The classical explanation is that people save effort with heuristics, but at the cost of accuracy (Payne et al. 1993, Shah & Oppenheimer 2008). In this view, humans and other animals rely on heuristics because information search and computation cost time and effort; heuristics trade off some loss in accuracy for faster and more frugal cognition.

There are two interpretations of this trade-off: (a) Rational trade-offs. Not every decision is important enough to warrant spending the time to find the best course of action; thus, people choose shortcuts that save effort. The program on the adaptive decision maker (Payne et al. 1993) is built on the assumption that heuristics achieve a beneficial trade-off between accuracy and effort. Here, relying on heuristics can be rational in the sense that costs of effort are higher than the gain in accuracy. (b) Cognitive limitations. Capacity limitations prevent us from acting rationally and force us to rely on heuristics, which are considered a source of judgmental errors.

The accuracy-effort trade-off is regularly touted as a potentially universal law of cognition. Yet the study on the hiatus heuristic illustrated that this assumption is not generally correct. The hiatus heuristic saves effort compared to the sophisticated Pareto/NBD model, but is also more accurate: a less-is-more effect.
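For reference, the hiatus heuristic discussed earlier in the review classifies a customer as inactive if no purchase has occurred within a fixed hiatus, typically nine months, ignoring all other purchase history. A minimal sketch, where the data representation is an assumption:

```python
from datetime import date, timedelta

HIATUS = timedelta(days=9 * 30)  # the nine-month hiatus, approximated in days

def is_active(last_purchase: date, today: date) -> bool:
    """Hiatus heuristic: a single cue and no free parameters.

    A customer counts as active iff the last purchase falls within
    the hiatus window; frequency, spend, and spacing are all ignored.
    """
    return today - last_purchase <= HIATUS

print(is_active(date(2021, 1, 10), date(2021, 6, 1)))  # True: within nine months
print(is_active(date(2020, 3, 1), date(2021, 6, 1)))   # False: hiatus exceeded
```

Because the threshold is fixed rather than estimated from data, the rule carries no estimation variance, a point the bias-variance discussion below makes precise.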

Ecological Rationality

Less-is-more effects require a new conception of why people rely on heuristics. The study of the ecological rationality of heuristics, or strategies in general, is such a new framework: “A heuristic is ecologically rational to the degree that it is adapted to the structure of the environment” (Gigerenzer et al. 1999, p. 13). Smith (2003) used this definition in his Nobel lecture and generalized it from heuristics to markets and institutions. The study of ecological rationality fleshes out Simon’s scissors analogy: “Human rational behavior (and the rational behavior of all physical symbol systems) is shaped by a scissors whose two blades are the structure of task environments and the computational capabilities of the actor” (Simon 1990, p. 7). If one looks only at one blade, cognition, one cannot understand why and when it succeeds or fails. The study of ecological rationality addresses two related questions: How does cognition exploit environmental structures, and how does it deal with error?

Exploiting environmental structure. In which environments will a given heuristic succeed, and in which will it fail? Environmental structures that have been identified include (Todd et al. 2011):

  1. Uncertainty: how well a criterion can be predicted.
  2. Redundancy: the correlation between cues.
  3. Sample size: number of observations (relative to number of cues).
  4. Variability in weights: the distribution of the cue weights (e.g., skewed or uniform).

For instance, heuristics that rely on only one reason, such as the hiatus heuristic and the take-the-best heuristic (see below), tend to succeed (relative to strategies that rely on more reasons) in environments with (a) moderate to high uncertainty (Hogarth & Karelaia 2007) and (b) moderate to high redundancy (Dieckmann & Rieskamp 2007). For customer activity, uncertainty means that it is difficult to predict future purchases, and redundancy might be reflected in a high correlation between length of hiatus and spacing of previous purchases.
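As a concrete illustration of a one-reason heuristic, here is a minimal sketch of take-the-best for a paired comparison, following the standard description in this literature: search cues in order of validity, stop at the first cue that discriminates, and decide for the object with the positive value. The city cues and values below are hypothetical stand-ins.

```python
def take_the_best(a: dict, b: dict, cues_by_validity: list) -> str:
    """Take-the-best: check cues in order of validity and stop at the
    first cue on which the two objects differ; that one cue decides."""
    for cue in cues_by_validity:
        if a[cue] != b[cue]:               # first discriminating cue found
            return "a" if a[cue] else "b"  # decide; all remaining cues ignored
    return "guess"                         # no cue discriminates

# Hypothetical paired comparison: which of two cities is larger?
cues = ["is_capital", "has_exposition_site", "has_major_soccer_team"]  # by assumed validity
city_a = {"is_capital": False, "has_exposition_site": True, "has_major_soccer_team": True}
city_b = {"is_capital": False, "has_exposition_site": False, "has_major_soccer_team": True}

print(take_the_best(city_a, city_b, cues))  # 'a': the second cue alone decides
```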

The study of ecological rationality results in comparative statements of the kind “strategy X is more accurate (frugal, fast) than Y in environment E” or in quantitative relations between the performance of strategy X when the structure of an environment changes (e.g., Baucells et al. 2008, Karelaia 2006, Martignon & Hoffrage 2002). Specific findings are introduced below.

Dealing with error. In much research on reasoning, a bias typically refers to ignoring part of the information, as in the base rate fallacy. This can be captured by the equation:

Error = bias + ε,  (1)

where ε is an irreducible random error. In this view, if the bias is eliminated, good inferences are obtained. In statistical theory (Geman et al. 1992), however, there are three sources of errors:

Error = bias + variance + ε,  (2)

where bias refers to a systematic deviation between a model and the true state, as in Equation 1.

To define the meaning of variance, consider 100 people who rely on the same strategy, but each one has a different sample of observations from the same population. Because of sampling error, the 100 inferences may not be the same. Across samples, bias is the difference between the mean prediction and the true state of nature, and variance is the expected squared deviation around this mean.

To illustrate, the nine-month hiatus heuristic has a bias but zero variance, because it has no free parameters to adjust to specific samples. In contrast, the Pareto/NBD model has free parameters and is likely to suffer from both variance and bias.

Variance decreases with increasing sample size, but also with simpler strategies that have fewer free parameters (and less flexible functional forms; Pitt et al. 2002). Thus, a cognitive system needs to draw a balance between being biased and being flexible (variance) rather than simply trying to eliminate bias.

In the extreme, as illustrated by the nine-month hiatus, the total elimination of variance at the price of higher bias can lead to better inferences. This “bias-variance dilemma” helps to explicate the rationality of simple heuristics and how less can be more (Brighton & Gigerenzer 2008, Gigerenzer & Brighton 2009).
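A small Monte Carlo sketch of the decomposition in Equation 2, with all numbers illustrative: 100 “people” each estimate the same true quantity, one group by the mean of their own small sample (a free parameter, hence variance), the other by a fixed rule (biased, but zero variance). With noisy small samples, the biased rule can have the lower total error.

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_VALUE = 10.0    # the true state of nature (illustrative)
FIXED_GUESS = 9.0    # a zero-parameter rule: biased, but nothing to estimate

sample_means, fixed_preds = [], []
for _ in range(100):                                    # 100 people, 100 samples
    sample = rng.normal(TRUE_VALUE, scale=5.0, size=5)  # small, noisy sample each
    sample_means.append(sample.mean())                  # flexible strategy
    fixed_preds.append(FIXED_GUESS)                     # fixed rule, same every time

for name, preds in (("sample mean", sample_means), ("fixed rule", fixed_preds)):
    preds = np.asarray(preds)
    bias_sq = (preds.mean() - TRUE_VALUE) ** 2  # squared deviation of mean prediction
    variance = preds.var()                      # spread of predictions across samples
    print(f"{name:>12}: bias^2 = {bias_sq:.2f}, variance = {variance:.2f}, "
          f"total = {bias_sq + variance:.2f}")
```

In this setup the sample mean is nearly unbiased but its variance across analysts is large, while the fixed rule’s small bias is its entire error: less estimation can mean more accuracy.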

The study of ecological rationality is related to the view that human cognition is adapted to its past environment (Cosmides & Tooby 2006), yet it should not be confused with the biological concept of adaptation. A match between a heuristic and an environmental structure does not imply that the heuristic evolved because of that environment (Hutchinson & Gigerenzer 2005).

The distinction between ecological and logical rationality is linked to that between correspondence and coherence (Hammond 2007), but it is not identical. If correspondence means achieving a goal in the world rather than cohering to a rule of logic, correspondence and ecological rationality refer to similar goals—although the study of the latter adds a mathematical analysis of the relation between heuristic and environment.

If correspondence, however, means that the mental representation corresponds to the world, as in a fairly accurate mental model or in Shepard’s (2001) view of the mind as a mirror reflecting the world, then ecological rationality is different. A heuristic is functional, not a veridical copy of the world.

Ecological rationality does not mean that all people are perfectly adapted to their environment. As Simon (1992) noted, if that were the case, one would only need to study the environment to predict behavior; the study of heuristics would be obsolete.

. . . . .

SUMMARY POINTS

  1. Heuristics can be more accurate than more complex strategies even though they process less information (less-is-more effects).
  2. A heuristic is not good or bad, rational or irrational; its accuracy depends on the structure of the environment (ecological rationality).
  3. Heuristics are embodied and situated in the sense that they exploit core capacities of the brain and their success depends on the structure of the environment. They provide an alternative to stable traits, attitudes, preferences, and other internal explanations of behavior.
  4. With sufficient experience, people learn to select proper heuristics from their adaptive toolbox.
  5. Usually, the same heuristic can be used both consciously and unconsciously, for inferences and preferences, and underlies social as well as nonsocial intelligence.
  6. Decision making in organizations typically involves heuristics because the conditions for rational models rarely hold in an uncertain world.

Reference: DOI: 10.1146/annurev-psych-120709-145346 · Source: PubMed


More information: Annelise A. Madison et al., “Risk assessment and heuristics: How cognitive shortcuts can fuel the spread of COVID-19,” Brain, Behavior, and Immunity (2021). DOI: 10.1016/j.bbi.2021.02.023
