Context-based decision making – How does the brain work?


When crossing the street, which way do you first turn your head to check for oncoming traffic? This decision depends on the context of where you are.

A pedestrian in the United States looks to the left for cars, but one in the United Kingdom looks right.

A group of scientists at Columbia’s Zuckerman Institute has been studying how animals use context when making decisions. And now, their latest research findings have tied this ability to an unexpected brain region in mice: an area called the anterior lateral motor cortex, or ALM, previously thought to primarily guide and plan movement.

This discovery, published today in Neuron, lends new insight into the brain’s remarkable ability to make decisions.

Flexible decision making is a critical tool for making sense of our surroundings; it allows us to have different reactions to the same information by taking context into account.

“Context-dependent decision-making is a building block of higher cognitive function in humans,” said neuroscientist Michael Shadlen, MD, PhD, the paper’s co-senior author with Richard Axel, MD.

“Observing this process in a motor area of the mouse brain, as we did with today’s study, puts us a step closer to understanding cognitive function at the level of brain cells and circuits.”

“If someone is standing uncomfortably close to me on a deserted street, I may try to run away, but if the same event occurred on a crowded subway car, I would feel no such danger,” said neuroscientist and first author Zheng (Herbert) Wu, PhD.

“My decision to move or not move is dependent on the context of where I am; thus giving a reason behind the choices I make.”

To investigate how the brain achieves this context-dependent flexibility, the team surveyed several brain areas dedicated to processing and integrating sensory information, but found the critical area to be a part of the motor cortex called the ALM.

Previous experiments suggested that the ALM has a relatively simple job: It guides movements of a mouse’s tongue and facial muscles.

Building on this understanding, the researchers designed a new experiment that required mice to make flexible decisions using their tongues and their olfactory system, which guides their sense of smell. In the experiment, a mouse first encountered a single odor.

The mouse had to remember this odor, because after a brief pause, the researchers then puffed a second odor over the nostrils of the mouse. If both odors were the same, the mouse had to lick a tube to the left to get water. If the two odors were different, it had to lick a tube to the right.
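The reward rule of this task reduces to a single comparison. As a toy sketch in code (the odor labels here are placeholders, not the actual odorants used in the study):

```python
def correct_lick(first_odor, second_odor):
    """Delayed match-to-sample rule: matching odors -> lick the left
    tube for water; non-matching odors -> lick the right tube."""
    return "left" if first_odor == second_odor else "right"

# Example trials with placeholder odor labels
correct_lick("A", "A")  # matching pair: the left tube is rewarded
correct_lick("A", "B")  # non-matching pair: the right tube is rewarded
```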

Previous work on this type of “delayed match to sample” test would lead one to expect that the mouse would use brain areas dedicated to odor perception to make the decision about which way to lick. Recordings of brain activity from these areas seemed to confirm this mechanism.

“Based on these recordings, one could imagine that these brain areas have the answer when the mouse receives the second odor,” said Dr. Shadlen.

“All that’s left to do is pass that answer to the brain’s motor system to produce the appropriate lick response to the left or right.”

If this were so, then the motor area should not play a role until the second odor is provided and the mouse decides whether the two odors are the same or different. Dr. Wu devised a clever way to test this prediction: he switched off the animals’ ALM until just before the second odor was given, turning ALM back on in time for the mice to receive the answer.

“According to the standard view, the mice should have been unfazed by this manipulation, as their olfactory system remained intact,” said Dr. Shadlen. “Instead they were impaired on the task.”

“Our results suggest that the ALM was required to solve the question of whether the two odors were a match and then to decide where to lick, prompting us to significantly rethink what the brain was doing to make these decisions,” said Dr. Wu.

ALM was not known to be involved in odor perception. Dr. Wu therefore took a closer look at the brain cells in ALM. He discovered a new type of neuron in ALM very near the surface of the brain that responds to the first odor. It keeps that information handy until the second odor is received.

To explore this unexpected result, the research team turned to theoretical neuroscientist Ashok Litwin-Kumar, PhD, to investigate a variety of potential mechanisms that could account for ALM’s role.

“Conventional wisdom held that the animals’ olfactory brain region should handle scent processing on its own, and then feed information to the ALM, which would then guide the tongue,” said Dr. Litwin-Kumar.

“But the data told us a different story; the first odor acts as a contextual clue, priming the ALM to then indicate that relationship by deciding which way to lick in response to the second odor.”

Today’s findings, while focused on the ALM, are important for how they can inform scientists’ larger understanding of brain function as a whole.

“Ultimately, we want to elucidate fundamental principles that explain simple behaviors, but that writ large provide insight into higher cognitive function in humans,” said Dr. Shadlen. “An essential step toward that goal is to knit together knowledge about neurons, circuits and behavior using the languages of biology and mathematics. This collaborative project highlights the promise of this strategy.”

Co-senior author Michael Shadlen, MD, PhD, is a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute, professor of neuroscience at Columbia’s Vagelos College of Physicians and Surgeons, an Investigator at the Howard Hughes Medical Institute and a member of the Kavli Institute for Brain Science.

Co-senior author Richard Axel, MD, is University Professor and Nobel Laureate, codirector of Columbia’s Zuckerman Institute, professor of pathology and biochemistry at Columbia’s Vagelos College of Physicians and Surgeons, an Investigator at the Howard Hughes Medical Institute and a member of the Kavli Institute for Brain Science.

First author Zheng (Herbert) Wu, PhD, is a postdoctoral research scientist jointly in the labs of Michael Shadlen and Richard Axel at Columbia’s Zuckerman Institute.

Co-author Ashok Litwin-Kumar, PhD, is a principal investigator at Columbia’s Zuckerman Institute, an assistant professor of neuroscience at Columbia’s Vagelos College of Physicians and Surgeons and a member of Columbia’s Center for Theoretical Neuroscience.

This paper is titled “Context-dependent decision making in a premotor circuit.” Additional contributors include Philip Shamash and Alexei Taylor.

Funding: This research was supported by the National Science Foundation Next Generation Networks for Neuroscience (NeuroNex) Award (DBI-1707398), the Burroughs Wellcome Foundation, the Gatsby Charitable Foundation, the Howard Hughes Medical Institute and the Simons Foundation.

Voluntary action depends on our capacity to learn how our actions relate to specific events in the external world, and use this knowledge to guide our decisions. Research on value-based decision-making has additionally revealed that the costs associated with specific actions, such as physical [1,2] or mental [3,4] effort, are weighed against their expected rewards [5,6].

In other words, when deciding whether to go out for dinner at a sushi or pizza restaurant, we consider not only how much we like either restaurant, but also how far we need to travel to reach them.

Importantly, navigating the external world requires continuously monitoring our decisions, actions, and their consequences, so as to detect potential difficulties or unexpected events that may arise and adapt our behaviour accordingly.

Returning to the dinner example, imagine you decide to go to the sushi restaurant but, as you step out of the house, you are faced with the smell of pizza from a new nearby restaurant.

This will trigger a conflict between your previous plan to have sushi and the tempting smell of pizza, and may lead you to re-evaluate your decision.

Research on conflict monitoring has shown that detecting conflicts between competing response options leads to the recruitment of cognitive control resources [7,8]. Cognitive control serves to resolve conflict online, for example, by suppressing inappropriate motor activations [9,10], or enhancing attention to task-relevant information [11,12], while sustained adjustments following conflict can help reduce subsequent conflict effects [13–15].

Therefore, cognitive control can be deployed proactively, to prevent or minimise conflict effects, as well as reactively, to resolve conflict once it is detected [16]. However, as engaging cognitive control is effortful, conflict is typically considered aversive [17–19].

When given a choice, people tend to avoid cognitively demanding tasks [20], such as high conflict tasks [21] and contexts [22–24].

Relatedly, many studies have shown that free choices can be biased by external stimuli, whether consciously [25,26] or unconsciously [27–33] perceived. Note that by “free choice” we refer to situations in which the context allows choosing between alternative response options, typically based on internally generated information (e.g. without a reason, based on learned values…).

That is contrasted with “instructed” or “forced” choice trials, which are used to refer to situations in which there is only one response option available, i.e. stimulus-driven responses, wherein external information determines the required response given the known rules of the task (e.g. a rightward target arrow requires a right key press).

In the aforementioned studies, participants were asked to choose between response alternatives, e.g. pressing a left vs. right button; yet participants had no particular motivation to pick one action over the other, as the alternatives had similar, or no, consequences.

Such free choice scenarios have been associated with higher activity in the dorsal anterior cingulate cortex (dACC) than when following instructions [34], with dACC activity also increasing when facing conflict with external stimuli [35,36], in both free and instructed trials [32].

In fact, choosing between indifferent options, or “underdetermined responding” [35], can itself be seen to constitute a type of conflict. When there are no outcomes to motivate the choice (e.g. [32]), or when choice alternatives have similar expected values [37–42], competing responses will be similarly activated, i.e. there will be response conflict.

This will require the recruitment of further cognitive resources to break the tie. Importantly, although free choices offer an opportunity to use only internal information to guide choice, the presence of additional external inputs can trigger activation of one response, e.g. by a left vs. right pointing arrow.

Therefore, this context would require the proactive recruitment of cognitive control to prioritise the use of internal and relevant information, by suppressing external distraction, resolving conflict between internal and external information, or delaying the decision.

Given the typical tendency to minimise effort and cognitive control engagement, following an external suggestion might then serve to facilitate decision-making. In other words, making use of any available information to guide such inconsequential decisions would serve to avoid unjustified cognitive demands.

The choice bias effect could thus be understood as reflecting a similar drive as conflict avoidance, i.e. avoiding cognitive control engagement. Yet, it might seem less clear whether motivated, value-based, decision-making would be similarly influenced by irrelevant, conscious, external stimuli.

Although the fields of value-based decision-making and conflict monitoring have historically remained largely separate, recent work has started to bridge this gap [7,8,20,36]. For example, Shenhav and colleagues’ [43] Expected Value of Control (EVC) model proposes that the allocation of cognitive control depends on trading off the expected value (i.e. rewards) of control engagement against the amount of control required and its associated cognitive effort costs.

In line with this account, cognitive demand avoidance can be modulated by task incentives and by interindividual variability in cognitive control efficiency [20]. Therefore, similarly to how value-based choices are used to infer the subjective (i.e. idiosyncratic) value associated with the choice alternatives, observing the degree to which one’s choices avoid cognitive control demand can be used to infer the subjective costs associated with exerting cognitive control. Shenhav and colleagues further proposed that the dACC is a key brain region involved in this cost-benefit analysis [43].
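The EVC trade-off can be illustrated with a minimal numerical sketch. This is a simplification, not the published model: the linear payoff function and the quadratic effort cost used here are invented for illustration.

```python
def best_control_intensity(intensities, payoff, cost_weight=1.5):
    """Choose the control intensity with the highest Expected Value of
    Control: expected payoff minus an effort cost that grows with
    intensity (a quadratic cost is assumed here for illustration)."""
    def evc(i):
        return payoff(i) - cost_weight * i ** 2
    return max(intensities, key=evc)

# With a linear payoff and a quadratic cost, an intermediate level of
# control wins the cost-benefit analysis over "no control" or "full control".
best_control_intensity([0.0, 0.5, 1.0], payoff=lambda i: 2 * i)
```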

Other neuro-computational models [4446] have also implicated the dACC in the recruitment of cognitive control resources, monitoring conflict, cognitive and physical effort, difficulty, surprise, or errors, as well as in computing cost-benefit trade-offs that guide the allocation of control.

Neuroimaging studies have shown that dACC encodes both mental [3,5] and physical [1,6,47] effort costs during value-based decision-making. Conflict monitoring has also long been associated with dACC activity [8,32,48], further supporting parallels between effort and conflict costs in decision-making [43].

In fact, a recent study showed that interindividual variability in conflict cost was related to its impact on risky decision-making [49]. Following other authors [8,49,50], we will hereafter refer to conflict costs as a shorthand for the aversiveness of the cognitive control demands entailed by conflict situations, including the suppression of irrelevant information and conflict resolution.

Rational, normative accounts of decision-making [51] would predict that decisions with important consequences (e.g. rewards) should motivate us to rely only on relevant, internal information (learned reward expectations), and successfully ignore irrelevant information.

Yet, the aforementioned perspective that value-based decisions involve cost-benefit trade-offs would predict that irrelevant information can bias decisions whenever the expected rewards do not outweigh the expected cognitive control costs involved in suppressing the irrelevant information.

Furthermore, recent work has shown that the competition between top-down vs. bottom-up signals, such as motivation vs. salience, can influence rapid attentional allocation, thus resulting in biases in value-based decisions induced by irrelevant, bottom-up (salience), information [52–55].

That work shows that choice biases can arise from the integration of information from different sources, given the choice context and input, leading to facilitation of a given response by irrelevant information, rather than invoking a role for conflict management (cf. [54]).

While not necessarily being inconsistent with such facilitation mechanisms, the perspective that people are motivated to avoid cognitive demands can offer an explanation as to why the recruitment of cognitive control resources is not enhanced to prevent such biases in the first place, e.g. proactively, whenever the expected control costs seem unjustified by the expected benefits.

Notably, this perspective also sheds light on the observation that, although free choices are typically preferred over no choice, the difficulty of the choice context, such as when making choices under uncertainty, or when having many options to consider (a.k.a. choice overload [56]), can render free choice undesirable [57], and reduce the subjective freedom experienced [58].

Therefore, our study aimed to further investigate this cost-benefit trade-off by testing whether biases in free choices induced by conscious distractors (flankers) would be evident in a value-based context, similarly to what had been previously observed for “indifferent” choices [26].

Independently of how conflict costs factor into our decision-making, experiencing conflict during a decision could also alter how we learn about action-outcome associations, i.e. instrumental learning.

The aversive nature of conflict has been shown to influence the processing of action outcomes. Conflicts can lead to a more negative evaluation of neutral stimuli [59,60], and a reduction in perceived control over action outcomes [61,62]. In line with findings on effort discounting [4,6], a recent study showed that response conflict may carry an implicit cost to obtained rewards [50].

Using the Simon task [63], Cavanagh and colleagues showed that participants preferred cue stimuli associated with rewards that followed non-conflicted trials, over stimuli associated with rewards that followed conflicted trials.

Importantly, during the learning phase of that study, participants could not choose what to do (i.e. they had to follow an instruction in the stimulus), hence could not make an action that did not trigger conflict.

Yet, from the perspective that the allocation of cognitive control depends on cost-benefit analyses, in a free choice scenario in which an available response option could serve to avoid or minimise conflict, e.g. by choosing an easier task or response, choosing the option that does entail conflict would likely be motivated by lower conflict costs or higher reward expectations.

In line with a moderating role for freedom of choice for conflict costs, choosing freely to do a cognitively demanding task (high conflict probability) was linked to greater striatum activity than when choosing the easy task, implying an intrinsic motivation that offset the cognitive control costs, whereas striatum activity patterns were reversed when having to follow instructions [22].

Therefore, it remains unclear whether conflict costs would still influence learning when participants could have made a choice that would not entail conflict.

Finally, in addition to experiencing conflicts between internal and external information (akin to the “pizza smell” example above), we can also experience conflicts between competing internal motivations, e.g. preferring sushi, but also wanting to please a friend who asks to have pizza.

Interestingly, it has been shown that motivational conflicts, such as between Pavlovian biases and instrumental task requirements, can impair instrumental learning [64,65]. This work shows that it is difficult to learn to act to avoid punishments, as it goes against the Pavlovian tendency of withholding action to avoid punishments.

The competing motivations will thus activate competing response options, requiring cognitive control to suppress the inappropriate Pavlovian bias [66]. Despite the differences in the underlying sources of conflict, common neural signals have been implicated in monitoring externally-triggered and motivational conflicts [66,67] (i.e. mid-frontal theta band oscillations, in turn thought to be linked to ACC [68]).

These findings further support the hypothesis that conflict costs could alter learning. Nonetheless, it remains possible that the precise nature of the conflict experienced–between internal vs. external information, or between competing internal motivations–could be a relevant moderator of its effects on learning.

The present study aimed to investigate the following two key questions: a) whether value-based decisions could be influenced by irrelevant distractors; b) whether experiencing conflict might influence instrumental learning.

Additionally, we assessed the role of two potential moderators of how learning might be influenced by conflict:

i) the type of conflict experienced–with external information, or between internal motivations;

ii) choice freedom, since having the possibility to make choices that could reduce conflict might alter the experience of conflict when the difficult option is chosen (in a free choice scenario), relative to when conflict is unavoidable (when following instructions).

To test these questions, we embedded irrelevant distractors (flankers) within a reversal-learning task (Fig 1), with intermixed free and instructed trials. Participants had to continuously track whether left or right hand actions had a high or low reward probability (75/25%), and contingencies reversed unpredictably.

As the same contingencies applied in free and instructed trials, participants were told to learn equally from the outcomes of both trial types, and that not complying with instructions would reduce their final earnings. Distractors could trigger conflict with an instructed action (e.g. >><>>) or with a freely chosen action (indicated by a bidirectional target), and might bias free choices.

In this context, participants could adapt to conflict by focusing on the target and ignoring the distractors, while free choices additionally offered an opportunity for conflict avoidance. Comparing the influence of conflict on learning in free and instructed trials allowed us to assess the role of having choice in whether to act in conflict with an external suggestion.

Furthermore, as instructions were equally likely to require making the high or low reward action, participants sometimes experienced conflict between two internal motivations: correctly following an instruction (e.g. left), and following their subjective value expectations about the best action (e.g. right).
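The trial structure described above can be sketched as follows. This is an illustrative simulation under stated assumptions, not the authors' implementation: the function names are ours, the reversal schedule is omitted, and a random policy stands in for the learner on free trials.

```python
import random

def reward_probability(action, high_value_action):
    """75/25% contingencies: the currently 'good' hand is rewarded on
    75% of trials, the other on 25% (contingencies reverse unpredictably)."""
    return 0.75 if action == high_value_action else 0.25

def run_trial(high_value_action, instruction=None, rng=random):
    """Instructed trial if an instruction is given, free choice otherwise.
    Returns the action taken and whether it was rewarded."""
    action = instruction if instruction is not None else rng.choice(["left", "right"])
    reward = rng.random() < reward_probability(action, high_value_action)
    return action, reward
```

Because the same contingencies govern both trial types, outcomes from instructed trials are just as informative for learning as outcomes from free choices.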

Fig 1. Task outline. A. Timeline of a trial. B. Task design and example mapping of actions to reward probabilities. Conflict between actions and external distractors is captured by the “distractor-action congruency” factor, where C = Congruent and I = Incongruent. Conflict between instructions and subjective action values (model-based) is exemplified here. Assuming participants correctly learned the current contingency, right (R) would be the subjectively “high value” action (i.e. no conflict if instructed right), and left (L) would be the “low value” action (i.e. conflict if instructed left).

As a brief glimpse of our findings, computational models of reinforcement learning [69,70] were adapted to test our hypotheses and fitted to trial-by-trial choice behaviour. Model comparison showed that a model implementing a distractor bias in the decision rule outperformed a simple RL model in describing the data.

The data and the model supported our hypothesis that value-based choices could be biased by irrelevant information, as conflict costs were traded off against expected rewards. Comparing models with different learning rates showed that learning was influenced by freedom of choice, and by conflict in instructed trials, when facing a motivational conflict between the instruction and subjective action values.
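One way a distractor bias can enter an RL decision rule is as a fixed bump to the decision variable of the action the distractor points to. The parameterization below is our own hedged sketch of this idea, not necessarily the exact model reported in the paper.

```python
import math

def p_left(q_left, q_right, distractor=None, beta=3.0, bias=0.5):
    """Softmax choice rule over learned action values; the distractor
    adds a constant 'bias' to the decision variable of its action."""
    v_left = beta * q_left + (bias if distractor == "left" else 0.0)
    v_right = beta * q_right + (bias if distractor == "right" else 0.0)
    return 1.0 / (1.0 + math.exp(v_right - v_left))

def update_q(q, reward, alpha=0.2):
    """Standard delta-rule (Rescorla-Wagner) value update."""
    return q + alpha * (reward - q)
```

With equal action values, the distractor tips choice probability toward its side; when the value difference is large, the bias term is outweighed, which is how expected rewards can be traded off against following the distractor.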

Zuckerman Institute

