Researchers evaluate different methods to test the presence of consciousness


How can you know that an animal, another human being, or anything else that seems conscious isn’t just faking it?

Does it enjoy an internal subjective experience, complete with sensations and emotions like hunger, joy, or sadness?

After all, the only consciousness you can know with certainty is your own.

Everything else is inference.

The nature of consciousness makes it by necessity a wholly private affair.

These questions are more than philosophical.

As intelligent digital assistants, self-driving cars and other robots start to proliferate, are these AIs actually conscious or do they merely seem to be?

Or what about patients in comas – how can doctors know with any certainty what kind of consciousness is or is not present, and prescribe treatment accordingly?

In my work, often with psychologist Jonathan Schooler at the University of California, Santa Barbara, we’re developing a framework for thinking about the many different ways to possibly test for the presence of consciousness.

There is a small but growing field looking at how to assess the presence and even quantity of consciousness in various entities.

I’ve divided possible tests into three broad categories that I call the measurable correlates of consciousness.

You can look for brain activity that occurs at the same time as reported subjective states. Or you can look for physical actions that seem to be accompanied by subjective states.

Finally, you can look for the products of consciousness, like artwork or music, or this article I’ve written, that can be separated from the entity that created them to infer the presence – or not – of consciousness.

Neural correlates of consciousness

Over the last two decades, scientists have proposed various ways to probe cognition and consciousness in unresponsive patients.

In such cases, there aren’t any behaviors to observe or any creative products to assess.

You can check for the neural correlates of consciousness, though.

What’s physically going on in the brain?

Neuroimaging tools such as EEG, MEG, fMRI and transcranial magnetic stimulation (each with its own strengths and weaknesses) can provide information on activity happening within the brain, even in comatose and vegetative patients.

Cognitive neuroscientist Stanislas Dehaene has identified what he calls four signatures of consciousness – specific aspects of brain activity he deems necessary for normal consciousness.

He focuses on what’s known as the “P3 wave” in the dorsolateral cortex – the part of the brain behind the top of your forehead – because it seems to correlate most reliably with normal conscious states.

He also focuses on long-range synchronized electric fields between different parts of the brain as another key signature of consciousness.

In tests which look for these signals in vegetative and minimally conscious patients, Dehaene and his colleagues have successfully predicted which patients are most likely to regain more normal states of consciousness.

Sid Kouider, another cognitive neuroscientist, has examined infants in order to assess the likelihood that very young babies are conscious.

He and his team looked for specific neural signatures that go along with subjective experience in adults.

They looked specifically for a certain type of brain waves, similar to the P3 wave Dehaene focuses on, that are reliable indicators of consciousness in adults.

They found clear analogs of the P3 wave in the brains of babies as young as five months old. Kouider concludes – unsurprisingly – that even young babies are very likely conscious in various complex ways, such as recognizing faces.

Behavioral correlates of consciousness

When considering potentially conscious entities that can’t communicate directly, and that won’t allow neuroscientific measurement tools on their head (if they even have heads), it’s possible to consider physical behaviors as clues for the presence and type of consciousness.

You know that a massive range of human behaviors are accompanied by conscious experience.

So when you see similar behaviors in other animals or even non-animals, can you reasonably infer the presence of consciousness?

For example, are cats conscious?

Their brain architecture is a little different from humans’.

They have a very minimal prefrontal cortex, which some scientists think is the center of many higher-order activities of the human brain.

But is a prefrontal cortex necessary for consciousness?

Cat behavior is complex and pretty easy to map onto human behavior in many ways.

Cats purr, flex their toes and snuggle when petted, in similar ways to people demonstrating pleasure when physically stimulated – minus the purrs, of course.

They meow loudly for food when hungry and stop meowing when fed.

They demonstrate curiosity or fear about other cats or humans with various types of body language.

These and many other easily observable behaviors add up to convincing evidence for most people that cats are indeed conscious and have rich emotional lives.

You can imagine looking for other familiar behaviors in a rat, or an ant or a plant – if you see things close enough to what you’d expect in conscious humans, you may credit the observed creature with a certain type of consciousness.

Creative correlates of consciousness

If for whatever reason, you can’t examine neural or behavioral correlates of consciousness, maybe you can look to creative outputs for clues that would indicate consciousness.

For example, when examining ancient megalithic structures such as Stonehenge, or cave paintings created as far back as 65,000 years ago, is it reasonable to assume that their creators were conscious in ways similar to us?

Most people would likely say yes. You know from experience that it would take high intelligence and consciousness to produce such items today, so you can reasonably conclude that our ancient ancestors had similar levels of consciousness.

What if explorers find obviously unnatural artifacts on Mars or elsewhere in the solar system?

It will depend on the artifacts in question, but if astronauts were to find anything remotely similar to human dwellings or machinery that was clearly not human in origin, it would be reasonable to infer that the creators of these artifacts were also conscious.

Closer to home, artificial intelligence has produced some pretty impressive art – impressive enough to fetch over US$400,000 in a recent art auction.

At what point do reasonable people conclude that creating art requires consciousness?


Researchers could conduct a kind of “artistic Turing Test”: ask study participants to consider various artworks and say which ones they conclude were probably created by a human.

If AI artwork consistently fools people into thinking it was made by a person, is that good evidence to conclude that the AI is at least in some ways conscious?

So far, AIs aren’t convincing most observers, but it’s reasonable to expect that they will in the future.

Where’s my ‘consciousness-ometer’?

Can anyone get a definitive answer about whether consciousness is present in an entity, and how much of it?

Unfortunately, the answer to both questions is no.

There is not yet a “consciousness-ometer,” but various researchers, including Dehaene, have some ideas.

Neuroscientist Giulio Tononi and colleagues such as Christof Koch focus on what they call “integrated information” as a measure of consciousness.

This theory suggests that anything that integrates at least one bit of information has at least a tiny amount of consciousness.

A light diode, for example, contains just one bit of information and thus has a very limited type of consciousness. With only two possible states, on or off, it’s a rather uninteresting kind of consciousness.

In my work, my collaborators and I share this “panpsychist” foundation.

We accept as a working hypothesis that any physical system has some associated consciousness, however small it may be in the vast majority of cases.

Rather than integrated information, however, we focus on resonance and synchronization as the key measure of consciousness: the degree to which parts of a whole resonate at the same or similar frequencies.

Resonance in the case of the human brain generally means shared electric field oscillation rates, such as gamma band synchrony (40-120 Hertz).

Our consciousness-ometer would then look at the degree of shared resonance and resulting information flows as the measure of consciousness.
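As an illustration of what such a measurement could look like, the sketch below computes a standard phase-locking value (PLV) between two signals after band-pass filtering into the gamma range. This is a generic synchrony statistic from the EEG literature, not the authors’ actual metric, and the signals here are synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(40.0, 120.0)):
    """Phase-locking value between two signals within a frequency band.

    PLV = |mean(exp(i * (phase_x - phase_y)))|, ranging from 0
    (no consistent phase relationship) to 1 (perfect phase synchrony).
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 1000  # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 60 * t)  # common 60 Hz gamma-band component
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)
plv_shared = phase_locking_value(x, y, fs)
plv_indep = phase_locking_value(x, rng.standard_normal(t.size), fs)
print(plv_shared, plv_indep)  # the pair sharing a component scores far higher
```

On a view like this, higher shared resonance across parts of a system would count as evidence of richer consciousness; the statistic itself, of course, carries no such interpretation.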

Humans and other mammals enjoy a particularly rich kind of consciousness because there are many levels of pervasive shared synchronization throughout the brain, nervous system and body.

Tests for consciousness are still in their infancy.

But this field of study is undergoing a renaissance because the study of consciousness more generally has finally become a respectable scientific pursuit.

Before too long it may be possible to measure just how much consciousness is present in various entities – including in you and me.

Funding: Tam Hunt has received some funding from the Fetzer Institute.

How can we measure whether and to what extent a particular sensory, motor or cognitive event is consciously experienced?

Such measurements provide the essential data on which the current and future science of consciousness depends, yet there is little consensus on how they should be made.

The problem of measuring consciousness differs from the problem of identifying unconscious processing. For instance, in subliminal perception experiments it is desirable to know whether or not a stimulus has been consciously perceived, and in implicit learning paradigms it is interesting to know whether the relationships between different consciously represented stimuli are unconsciously inferred.

Measuring consciousness, however, requires saying something about both conscious level and conscious content beyond the zero-point of unconsciousness.

Here, we review current approaches for measuring consciousness, covering both behavioural measures and measures based on neurophysiological data.

We outline a variety of broad theoretical positions before describing a range of measures in the context of these theories, emphasizing recent contributions. We find that potential and actual conflicts among measures suggest new experiments (Table 1); we also assess how different measures can track the graded nature of conscious experience (Table 2).

We conclude that it is only by behaving sensibly in a theoretical context that proposed measures can pick themselves up by their bootstraps, validating both themselves as measures of what they claim to measure and the theories involved.

Table 1

Conflicts between measures

Each entry below describes whether a conflict can arise in which one measure finds a content conscious while another finds it unconscious. Entries reflect a scale according to which a particular conflict is (i) experimentally noted, (ii) not yet shown but entirely possible, (iii) experimentally open, (iv) possible but not tested, (v) unlikely given current evidence and/or theory, or (vi) not possible.

Content conscious by objective measures, but unconscious by:
  • Strategic control: unconscious knowledge by Jacoby’s process dissociation procedure is ipso facto conscious by objective measures (e.g. Refs [60,72]).
  • Subjective: in both subliminal perception and implicit learning, subjects often pass objective tasks while claiming to have no knowledge or showing no relation between confidence and accuracy (e.g. [22,73–75]).
  • Wagering: shown in blindsight and in the Iowa gambling task [34].
  • Widespread activation: local neuronal activity can support discriminatory behaviour in many non-conscious organisms (e.g. nematodes and worms); in humans, at least sensory and motor cortices need to be active.
  • Synchrony: unlikely given current evidence.
  • Complexity measures: not yet tested but entirely possible.

Content conscious by strategic control, but unconscious by:
  • Objective: not possible.
  • Subjective: subjects can control which grammar to employ while claiming to be guessing [32], and hypnotized subjects can engage in strategic control while reporting no awareness [76].
  • Wagering: not yet shown but entirely possible (see Box 2).
  • Widespread activation: unlikely, because strategic control probably requires activation in both perceptual and frontal regions.
  • Synchrony: unlikely given current evidence.
  • Complexity measures: possible but not tested.

Content conscious by subjective measures, but unconscious by:
  • Objective: not possible.
  • Strategic control: shown in Stroop effects, where a person can report the word’s meaning but cannot control its rapid use.
  • Wagering: as yet only shown in our unpublished work, in which a person can report awareness but still wager indiscriminately.
  • Widespread activation: unlikely, as for strategic control.
  • Synchrony: unlikely given current evidence.
  • Complexity measures: possible but not tested.

Content conscious by wagering, but unconscious by:
  • Objective: not possible.
  • Strategic control: not yet shown but entirely possible (see Box 2).
  • Subjective: not yet shown but entirely possible (see Box 2).
  • Widespread activation: unlikely, as for strategic control.
  • Synchrony: unlikely given current evidence.
  • Complexity measures: possible but not tested.

Content conscious by widespread activation, but unconscious by:
  • Objective: the cognitive control system, including prefrontal cortex, is activated by objectively invisible stimuli [56].
  • Strategic control: likely for Stroop with clearly shown words [44].
  • Subjective: shown in a ‘relative blindsight’ paradigm [31].
  • Wagering: likely given the results with verbal subjective measures, but not yet tested.
  • Synchrony: experimentally open; some studies show increased long-range synchrony accompanying conscious access [48].
  • Complexity measures: possible in theory; not tested in practice.

Content conscious by synchrony, but unconscious by:
  • Objective, strategic control and wagering: γ synchrony persists during non-REM sleep and under anaesthesia [53].
  • Subjective: as above; also, similar levels of γ synchrony are observed during non-REM and during (reportable) REM sleep [52].
  • Widespread activation: gamma synchrony is often localized [47].
  • Complexity measures: possible in theory; not tested in practice.

Content conscious by complexity measures, but unconscious by:
  • Objective, strategic control, subjective and wagering: possible in theory (but see Φ); not tested in practice.
  • Widespread activation: high neural complexity (or Φ, or causal density) probably requires widespread activity; all else being equal, larger networks give rise to higher complexity values [17].
  • Synchrony: possible in theory; not tested in practice.

Table 2

Sensitivity to graded consciousness

Conscious level can be graded on a scale from coma to full wakefulness, and conscious contents can also be graded (e.g. fringe consciousness and Ganzfeld experiences). For each measure below: its type, its primary theoretical affiliation, its sensitivity to graded conscious level, and its sensitivity to graded conscious content.

  • Discrimination behaviour (objective). Affiliation: WDT. Graded level: none (either an organism is sufficiently conscious to show choice behaviour, or it is not). Graded content: the d′ value in SDT might index graded consciousness, though typically any d′ above zero is taken to imply full consciousness [1].
  • Strategic control (objective). Affiliation: integration theory. Graded level: none (see above). Graded content: none so far; the various equations developed assume that a content is either clearly conscious or unconscious (e.g. Ref. [9]).
  • Introspective report (subjective). Affiliation: HOT. Graded level: poor and indirect; poor verbal coherence might indicate low conscious level. Graded content: introspective reports are explicitly highly sensitive to conscious content and can indicate close mismatches between observed and reported states.
  • Confidence ratings (subjective). Affiliation: HOT. Graded level: poor and indirect; confidence might diminish with conscious level. Graded content: confidence can indicate degrees of higher-order belief.
  • PDW (subjective). Affiliation: HOT. Graded level: poor and indirect, though various continuous measures can be used. Graded content: gambling measures can indicate degrees of higher-order belief (see Box 2).
  • Glasgow coma scale (objective and subjective). Affiliation: none. Graded level: high. Graded content: none.
  • Bispectral index (EEG). Affiliation: none. Graded level: high. Graded content: none.
  • Early ERP, ‘awareness negativity’ [77] (EEG/MEG). Affiliation: localized integration [14,39]. Graded level: most ERPs are attenuated by sleep and low arousal, but this has not yet been directly tested for awareness negativity. Graded content: some; early ERPs are delayed for low-contrast stimuli [77].
  • Late ERP, P300 (EEG/MEG). Affiliation: global integration [40]. Graded level: P300 can be elicited during sleep, though with a different profile [78]. Graded content: low; P300 dichotomously characterizes ‘seen’ versus ‘not seen’ trials [40].
  • Widespread activation (general neuroimaging). Affiliation: integration. Graded level: imaging of consciousness-impaired patients can distinguish different conscious levels [45]. Graded content: low; access to the global workspace is usually considered all-or-none [10].
  • Induced γ activity (synchrony). Affiliation: integration (local and/or global). Graded level: synchrony is present even in non-REM sleep [53]. Graded content: not tested (to our knowledge).
  • SSVEP frequency ‘tag’ (synchrony). Affiliation: global integration. Graded level: the auditory frequency tag is modulated by arousal level [79]. Graded content: not tested (to our knowledge).
  • Neural complexity (complexity). Affiliation: integration. Graded level: high (in principle, but not yet shown). Graded content: low.
  • Information integration, Φ (complexity). Affiliation: integration. Graded level: high (in principle, but not yet shown). Graded content: some (in principle, Φ can gauge conscious contents).
  • Causal density (complexity). Affiliation: integration. Graded level: high (in principle; shown only in our own unpublished work). Graded content: possibly revealed by causal interaction patterns, but not yet shown.

Theories of consciousness

Worldly discrimination theory

Perhaps the simplest theory that still impacts the experimental literature is that any mental state that can express its content in behaviour is conscious; thus, a person shows they are consciously aware of a feature in the world when they can discriminate it with choice behaviour [1,2].

This theory often makes use of signal-detection theory (SDT), a statistical framework for quantifying the discriminability of a stimulus [3].

SDT itself is mute on the subject of consciousness and can, thus, be combined with different theories.

The combination of SDT with the worldly discrimination theory (WDT) asserts that continuous information available for discriminations is necessarily the content of conscious mental states.
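SDT’s sensitivity statistic can be made concrete. The sketch below is an illustrative Python implementation of the standard d′ measure, z(hit rate) − z(false-alarm rate), with a common log-linear correction for extreme rates; the trial counts are hypothetical.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Adding 0.5 to each cell (a log-linear correction) avoids infinite
    z-scores when an observed rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical blindsight-style data: ~70% correct despite claimed unawareness.
sensitivity = d_prime(hits=70, misses=30, false_alarms=30, correct_rejections=70)
print(round(sensitivity, 2))
```

Under WDT, any d′ reliably above zero counts as conscious perception of the stimulus; subjective measures dispute exactly this inference.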

This theory captures one property of conscious knowledge, namely that it enables choice behaviour.

However, rightly or wrongly, it does not respect other properties that are associated with consciousness.

For example, according to this theory blindsight patients see consciously, because forced-choice discrimination is the means by which we infer that they can see at all.

However, two properties of blindsight indicate intuitively that the seeing is not conscious [4].

First, blindsight patients do not spontaneously attempt to use the information practically or inferentially.

Second, blindsight patients themselves think they cannot see.

Integration theories

Other theories attempt to locate a divide between conscious and unconscious processes that respects one or both of the intuitions just mentioned.

According to integration theories, conscious contents are widely available to many cognitive and/or neural processes.

This core idea has been expressed in various ways.

In philosophy, it has been described as inferential promiscuity [5], fame in the brain [6], the unified field theory [7] and global access; in cognitive psychology as broadcast within a global workspace [8] and in a more constrained way as the process dissociation framework [9]; and in neuroscience as a neuronal global workspace [10], a dynamic core [11,12], integrated information [13], and, in a more constrained way, as locally recurrent activity [14] or neuronal synchrony [15,16].

The neuroscience theories in particular have given rise to several putative measures that have been used to quantify simultaneous integration and differentiation in neural dynamics on the basis that conscious experience is also simultaneously integrated and differentiated [17].

According to these theories a mental state is conscious if it provides a sufficiently informative discrimination among a large repertoire of possible states, in which successful discrimination requires both differentiation and integration [11,12].

Higher-order thought theories

According to higher-order thought (HOT) theories, a mental state is conscious when a person is actually aware [18] or disposed toward being aware [19] of being in that state. Theories differ according to whether awareness of the mental state is achieved by perceiving it [20] or thinking about it [18].

HOT theories differ from WDTs in that it is the ability of a person to discern their own mental state, rather than the state of the world, that determines whether a mental state is conscious. In the context of SDT, HOT theories are associated either with the criterion of standard SDT or with a second level of discrimination – discriminating not the world (as in the WDT) but the accuracy of our responses [21].

Because of their differing theoretical affiliations, measures of consciousness can, and do, conflict with each other, as detailed in Table 1.

Also, measures of consciousness not only should distinguish between conscious and unconscious processing but also indicate the degree to which an organism or a mental state is conscious [22,23]. Sensitivity to graded consciousness is described in Table 2.

All theories described so far, with the exception of some neural integration theories [11,13], describe conditions for asserting whether a particular mental state is conscious (conscious content).

They do not generally pertain to whether an organism is conscious or unconscious at a particular time (conscious level).

As we will see, measures of consciousness can, and do, address both of these issues.

Behavioural measures

‘Objective measures’ take the ability to choose accurately under forced-choice conditions as indicating a conscious mental state [24,25].

For example, being able to pick which item might come next indicates conscious knowledge of regularities in sequences.

Conversely, knowledge is unconscious if a distinction in the world expresses itself only in non-intentional characteristics of behaviour (such as its speed) or in galvanic skin response, functional magnetic resonance imaging (fMRI) or other physiological characteristics not expressed in behaviour at all [26].

That is, knowledge is unconscious if it expresses itself in an indirect – but not a direct – test [27,28]. Unqualified trust in objective measures presupposes WDT and conflicts with most other measures (Table 1).

‘Strategic control’ determines the conscious status of knowledge by the person’s ability to deliberately use or not use the knowledge according to instructions.

In Jacoby’s process dissociation procedure [9], a person either tries to avoid using the information (exclusion task) or makes sure they do use it (inclusion task); any difference in influence of the stimulus between these conditions indicates conscious knowledge, and any use of it despite intentions in the exclusion condition indicates unconscious knowledge (e.g. Refs [29,30]).
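The logic of inclusion and exclusion tasks can be expressed in Jacoby’s standard estimation equations, Inclusion = C + U(1 − C) and Exclusion = U(1 − C), which give C = Inclusion − Exclusion and U = Exclusion / (1 − C). A minimal sketch with hypothetical proportions:

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Jacoby's process dissociation estimates.

    Inclusion = C + U*(1 - C) and Exclusion = U*(1 - C), so:
      C (conscious, controlled influence)  = Inclusion - Exclusion
      U (unconscious, automatic influence) = Exclusion / (1 - C)
    """
    c = p_inclusion - p_exclusion
    u = p_exclusion / (1.0 - c) if c < 1.0 else float("nan")
    return c, u

# Hypothetical data: an item is used on 80% of inclusion trials
# but still intrudes on 30% of exclusion trials.
c, u = process_dissociation(0.80, 0.30)
print(round(c, 3), round(u, 3))  # prints 0.5 0.6
```

Any nonzero U here reflects influence that escaped intentional control, which this measure counts as unconscious knowledge.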

Unqualified trust in this measure presupposes integration theory.

‘Subjective measures’ require subjects to report their mental states. Most simply, subjective measures have been used to ascertain whether a person knows that they know: passing an objective test (as in the WDT) indicates knowledge, but not awareness of that knowing.

To test for awareness of knowing, confidence ratings can be given.

If, for all the trials on which the person says ‘guess’, discrimination performance is still above baseline, then there is evidence that the person has knowledge that they do not know they have: unconscious knowledge by the ‘guessing criterion’ [31].

If a person’s knowledge states are conscious, they will know when they know and when they are just guessing. In that case there should be a relationship between confidence and accuracy; a relationship indicates conscious knowledge, and no relationship indicates unconscious knowledge by the ‘zero-correlation criterion’ [32,33].
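Both criteria reduce to simple computations over per-trial (confidence, accuracy) pairs. The sketch below uses made-up trial data, and substitutes a simple confidence-accuracy difference for a formal correlation; the function names are illustrative.

```python
def guessing_criterion_accuracy(trials):
    """Accuracy on trials the subject labelled as pure guesses (conf == 0)."""
    guesses = [correct for conf, correct in trials if conf == 0]
    return sum(guesses) / len(guesses) if guesses else None

def confidence_accuracy_gap(trials):
    """Crude zero-correlation check: mean confidence on correct trials
    minus mean confidence on errors. A gap near zero suggests the
    person cannot tell when they know (unconscious knowledge)."""
    right = [conf for conf, correct in trials if correct]
    wrong = [conf for conf, correct in trials if not correct]
    if not right or not wrong:
        return None
    return sum(right) / len(right) - sum(wrong) / len(wrong)

# Hypothetical trials: (confidence: 0 = guess .. 2 = sure, correct: 0/1).
trials = [(0, 1), (0, 1), (0, 0), (1, 1), (1, 0), (2, 1), (2, 1), (0, 1)]
print(guessing_criterion_accuracy(trials))  # 0.75, above the 0.5 chance level
print(confidence_accuracy_gap(trials))
```

Above-chance accuracy on ‘guess’ trials would satisfy the guessing criterion; a gap near zero despite above-chance accuracy overall would satisfy the zero-correlation criterion.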

Unqualified trust in subjective measures presupposes one or other of the higher-order theories.

An advantage of subjective measures is that the conscious status of a range of mental states can be assessed, including both knowledge content and phenomenal content (Box 1).

For example a blindsight patient can consciously know without consciously seeing – if they think they know but they do not think they see.

Graded degrees of conscious seeing were assessed by Overgaard et al. [22]: normal subjects consistently reported glimpses or impressions of content they were not willing to say they actually saw (Table 2).

Subjective specification of conscious content is often associated with introspection, but not all subjective reports are introspective given that introspection requires being consciously aware of being in a mental state (rather than merely being consciously aware of states in the world) [18,22].

Box 1

Structural knowledge and judgment knowledge

Tasks can involve a range of knowledge states, the conscious status of each of which can be assessed subjectively.

For example, when a person is exposed to a structured domain (e.g. strings from an artificial grammar), they learn about the structure (structural knowledge). In artificial grammar learning, this might consist of the knowledge that an ‘M’ can start a string, of memories of whole strings that were presented, and so on.

In the test phase, the structural knowledge is brought to bear on a test item to form a new piece of knowledge: the judgment, for example, that this string is grammatical (judgment knowledge) [58].

Structural knowledge can be unconscious when judgment knowledge is conscious. For example in natural language you can consciously know whether a sentence in your native tongue is grammatical or not (conscious judgment knowledge) but have no idea why you know that.

It is important to be clear whether a measure tests the conscious status of judgment or structural knowledge.

Confidence ratings and wagering measures involve confidence or wagers on a judgment, therefore the guessing and zero-correlation criteria in these cases test the conscious status of judgment knowledge only.

Similarly, Jacoby’s process dissociation procedure measures the conscious status of judgment knowledge [9]: in implicit learning tasks for example, a person can exclude effectively because they consciously know that a response satisfies structural constraints without consciously knowing what those structural constraints are (e.g. that the response forms part of a long-distance dependency, of a symmetry and so on) [59,60].

Dienes and Scott [58] introduced a simple subjective way of measuring the conscious status of structural knowledge: for each judgment in a test phase, subjects indicated whether their judgment was based on random guessing, intuition, conscious rules or memory.

Guessing and intuition prima facie indicate unconscious structural knowledge, and conscious rules and memory indicate conscious structural knowledge.

Dienes [21] argued that structural knowledge might be the interesting target for implicit learning research (insofar as it has indicated qualitative differences in knowledge), and that judgment knowledge might be the interesting target for perception research.

Post-decision wagering

Post-decision wagering (PDW) is a recent variation on confidence ratings whereby subjects make a first-order discrimination and then place a wager (rather than a confidence rating) regarding the outcome of the discrimination [34,35].

As with confidence ratings PDW presupposes a version of the HOT theory. Yet, because PDW does not ask for subjective reports, its proponents claim that it is a direct and objective measure of consciousness (see Box 2 for arguments against this claim). An advantage is that the lack of subjective reports enables the method to be used with children [35] and animals.

Box 2

Post-decision wagering: a ‘direct’ measure of awareness?

In PDW, subjects make a first-order discrimination and then place a wager on its outcome [34]. Unconscious knowledge can be shown by above-chance first-order discriminations when (i) low wagers are given (guessing criterion) or (ii) there is no relationship between wagering and accuracy (zero correlation criterion).

In one example the blindsight subject ‘GY’ classified a sensory stimulus as either present or absent, and then wagered either a small monetary stake or a large stake on the correctness of this classification. Although GY made the correct classification on ~70% of trials, he was just as likely to bet low as he was to bet high on these trials.

This absence of advantageous wagering is taken as evidence for absence of consciousness of the correctness of the first-order discrimination. Conversely, good first-order performance accompanied by advantageous wagering is taken as evidence of awareness of the first-order stimuli. Like confidence ratings, PDW requires the subject to make a ‘metacognitive’ judgment about a (putatively) conscious experience but it differs by implementing this requirement indirectly, via a wager.

As a result there can be conflicts between wagering and verbal reports that directly express HOTs (see Table 1 in the main text). Because wagering might avoid some biases affecting introspective and confidence reports (e.g. subjects can be reluctant to report weakly perceived stimuli [22]), PDW has been asserted to provide a ‘direct and objective’ measure of awareness [34]. This is a strong claim that is difficult to justify [61–63]:

  • All behavioural measures, including PDW, require a response criterion: for example, whether to push a button or not (therefore claiming a ‘direct’ behavioural measure might be mistaken from the outset). Any response criterion can be affected by cognitive bias, and, for PDW, a plausible bias could arise from risk aversion. As with confidence methods, the zero correlation criterion can take account of bias but any trial-to-trial variation in bias will still undermine its sensitivity.
  • Because PDW does not ask for subjective reports, it is difficult to exclude the possibility that advantageous wagering could be learned unconsciously. This could be shown by wagering advantageously (based on unconscious judgment knowledge) while always believing that one is guessing.
  • Because PDW requires a metacognitive judgment about a putatively conscious experience, it is apparently no more ‘objective’ than a confidence judgment.

PDW highlights the interdependence of measures and theories. According to HOT theories the metacognitive nature of PDW is not problematic because some metacognitive content is constitutive of any conscious state. However, from non-HOT perspectives, the absence of wagering-related metacognitive content does not by itself imply the absence of primary (sensory) conscious content.

Finally, most behavioural measures are aimed at assessing whether particular mental content is conscious, not whether an organism is conscious. One exception is the Glasgow coma scale, a set of behavioural tests used to assess the presence, absence and depth of coma in patients with brain trauma [36]. In clinical contexts such behavioural tests are increasingly being augmented by brain-based measures of conscious level.
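The Glasgow coma scale itself is just the sum of three behavioural subscales. A minimal sketch (the 8-or-below coma threshold is a common clinical convention, not part of the scale definition):

```python
def glasgow_coma_score(eye, verbal, motor):
    """Glasgow coma scale: eye opening (1-4) + verbal (1-5) + motor (1-6).

    Totals range from 3 (deeply comatose) to 15 (fully awake);
    8 or below is conventionally taken to indicate coma.
    """
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("subscale score out of range")
    return eye + verbal + motor

print(glasgow_coma_score(4, 5, 6))  # 15: fully responsive
print(glasgow_coma_score(1, 1, 2))  # 4: deep coma
```

Note that the scale grades conscious level only; it says nothing about conscious content, which is why brain-based measures are increasingly used alongside it.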

Brain Measures

Electroencephalographic measures

In 1929, Hans Berger discovered that waking consciousness is associated with low-amplitude, irregular electroencephalographic (EEG) activity in the 20–70 Hz range. It is now known that unconscious conditions such as non-REM sleep, coma, general anaesthesia and epileptic absence seizures show predominantly low-frequency, regular and high-amplitude oscillations [37,38].

Event-related cortical potentials (ERPs) have been used to assess whether a stimulus is consciously perceived or not, although there is dispute about whether early (e.g. ‘visual awareness negativity’, ~100 ms [39]) or late (e.g. the ‘P300’, ~300 ms [40]) components are most relevant. ERPs are also associated with functions beyond consciousness per se, for example novelty detection and recognition [41].

The proprietary ‘bispectral index’ (BIS) combines various aspects of the EEG signal to estimate anaesthetic depth (conscious level) and hence the probability of accidental waking during surgery [42]. EEG measures either float free of theory, gaining purchase through reliable correlation (e.g. BIS), or assume a version of integration theory in which the appearance of a particular ERP indicates global availability [40] or locally recurrent processing [39] (Table 2).

Widespread activation

In line with integration theories, abundant evidence indicates that consciously perceived inputs elicit widespread brain activation, as compared with inputs that do not reach consciousness [43].

For example, Dehaene and colleagues have shown in a visual masking paradigm that consciously seen stimuli activate a broad frontoparietal network compared with unseen stimuli, by using both fMRI [44] and ERP signals [40].

Neuroimaging of vegetative and minimally conscious patients also reveals stimulus-evoked activity only in sensory cortices [45].

However, differences in conscious perception are often confounded with differences in performance. Lau and Passingham [31] controlled for this confound by using a metacontrast masking paradigm and found that conscious and unconscious conditions are differentiated only by activity in the left mid-dorsolateral prefrontal cortex; widespread brain activity was found in both conditions given sufficiently accurate performance.

These results indicate that widespread activation can conflict with other measures (Table 1), although it is difficult to know whether the additional prefrontal activity is related to the generation of conscious content and/or to subjective report of that content.


Neuronal synchrony

Several researchers have suggested that consciousness arises from transient neuronal synchrony, possibly in the γ (30–70 Hz) [15,16] or β (~15 Hz) [46] ranges. Measuring consciousness by synchrony presupposes integration theories of at least a limited kind (to the extent that local synchrony is deemed sufficient [14,47]).

Several studies have reported an association between synchrony and consciousness, both in induced γ-range activity [47,48] and in steady-state visual–evoked potentials (SSVEP) (‘frequency tags’ [49]). However, there is not yet evidence that disruption of γ-band synchrony leads to disruption of conscious contents [50], and γ oscillations have been associated with a wide range of cognitive functions in addition to consciousness per se [51]. Moreover, γ synchrony can be present equally during REM (consciously vivid) and non-REM (dreamless) sleep states [52], and can also be high during anaesthesia [53]. Together these observations indicate that neuronal synchrony may at best be necessary, but is not sufficient, for consciousness.
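Synchrony between two signals is often quantified by the phase-locking value (PLV): the magnitude of the average phase-difference vector, which is 1 for perfect locking and near 0 for drifting phases. A sketch with simulated 40 Hz signals, one pair phase-locked and one with a random-walk phase (all signal parameters are illustrative assumptions):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(2)

# Gamma-band (40 Hz) signals: two with a fixed phase lag, one with drifting phase.
base_phase = 2 * np.pi * 40 * t
locked_a = np.sin(base_phase) + 0.3 * rng.standard_normal(t.size)
locked_b = np.sin(base_phase + 0.5) + 0.3 * rng.standard_normal(t.size)  # constant lag
drift = np.cumsum(rng.standard_normal(t.size)) * 0.1                     # random-walk phase
unlocked = np.sin(base_phase + drift) + 0.3 * rng.standard_normal(t.size)

def plv(x, y):
    """Phase-locking value: |mean of exp(i * phase difference)|, via the Hilbert transform."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

print(plv(locked_a, locked_b))  # high: phases stay aligned
print(plv(locked_a, unlocked))  # lower: phase difference wanders
```

Note that a constant phase lag does not reduce the PLV: the measure is sensitive only to the stability of the phase relationship, not its value.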

Complexity, information integration and causal density

Several recent measures build on the observation that conscious scenes are distinguished by being simultaneously integrated (each conscious scene is experienced ‘all of a piece’) and differentiated (each conscious scene is composed of many distinguishable components and is therefore different from every other conscious scene) [11,13,17] (Box 3).

The dynamic core hypothesis (DCH) proposes that consciousness arises from neural dynamics in the thalamocortical system with just these features, as measured by the quantity ‘neural complexity’ (CN). CN is an information-theoretic measure; the CN value is high if each subset of a neural system can take on many different states and if these states make a difference to the rest of the system [54].

Box 3

Consciousness and complexity

Three recently proposed measures – neural complexity CN [54], information integration Φ [13] and causal density cd [55] – attempt to capture the coexistence of integration and differentiation that is central to ‘complexity’ theories of consciousness (Figure I).

All these measures are defined in terms of the stationary dynamics of a neural system (X), composed of N elements. CN and Φ are based on information theory, whereas cd is based on multivariate autoregressive modelling.

The neural complexity CN(X) of X is calculated as the average mutual information (MI; a measure of statistical dependence) among subsets of all possible sizes for all bipartitions of X. This quantity is high if small subsets of X show high statistical independence but large subsets show low independence. In view of the computational expense of considering all bipartitions, CN can be approximated by considering only bipartitions of one element and the remainder of the system; another approximation derives directly from network topology [64].
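Under a Gaussian assumption, MI between two subsets has a closed form in terms of covariance determinants, which makes the one-vs-rest approximation to CN easy to sketch. The toy systems below (a shared-drive network versus independent noise) are illustrative assumptions, not a validated estimator:

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_mi(cov, idx_a, idx_b):
    """MI between two subsets of a zero-mean Gaussian system:
    MI = 0.5 * ln( det(cov_A) * det(cov_B) / det(cov_AB) )."""
    ca = cov[np.ix_(idx_a, idx_a)]
    cb = cov[np.ix_(idx_b, idx_b)]
    cab = cov[np.ix_(idx_a + idx_b, idx_a + idx_b)]
    return 0.5 * np.log(np.linalg.det(ca) * np.linalg.det(cb) / np.linalg.det(cab))

def neural_complexity_1_vs_rest(data):
    """One-vs-rest approximation to CN: average MI between each element
    and the remainder of the system (data: samples x elements)."""
    cov = np.cov(data, rowvar=False)
    n = cov.shape[0]
    mis = [gaussian_mi(cov, [i], [j for j in range(n) if j != i]) for i in range(n)]
    return float(np.mean(mis))

n, samples = 6, 5000
independent = rng.standard_normal((samples, n))                   # no integration
common = rng.standard_normal((samples, 1))
coupled = 0.7 * common + 0.5 * rng.standard_normal((samples, n))  # shared drive

print(neural_complexity_1_vs_rest(independent))  # near 0
print(neural_complexity_1_vs_rest(coupled))      # clearly positive
```

The full CN averages MI over bipartitions of every size; restricting to single-element bipartitions is the computational shortcut mentioned above.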

Information integration Φ(X) is defined as the effective information (EI) across the ‘minimum information bipartition’ (MIB) of X, where EI is a directed version of MI that depends on stimulating one half of a bipartition with random (maximally entropic) activity and where the MIB is that bipartition for which the normalized EI is minimum, the informational ‘weakest link’ [65].

Whereas CN is a measure of actual neural activity, Φ is a measure of the capacity of a system to integrate information. Like CN, Φ is infeasible to compute for large N. It is also obviously challenging to inject arbitrary subsets of real biological systems with random activity.
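The combinatorial search for the MIB can still be illustrated on a toy system. In the sketch below, Gaussian MI is substituted for EI (a loud simplification: proper EI requires perturbing one half of the partition with maximally entropic input), and each cut is normalised by the size of its smaller subset; both choices are assumptions made for illustration only:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

# Toy system: two tightly coupled pairs {0,1} and {2,3}, independent of each other.
samples = 20000
a = rng.standard_normal((samples, 1))
b = rng.standard_normal((samples, 1))
data = np.hstack([
    a + 0.1 * rng.standard_normal((samples, 1)),
    a + 0.1 * rng.standard_normal((samples, 1)),
    b + 0.1 * rng.standard_normal((samples, 1)),
    b + 0.1 * rng.standard_normal((samples, 1)),
])
cov = np.cov(data, rowvar=False)
n = cov.shape[0]

def gaussian_mi(idx_a, idx_b):
    """Gaussian MI across a bipartition, standing in for effective information."""
    det = np.linalg.det
    ca = cov[np.ix_(idx_a, idx_a)]
    cb = cov[np.ix_(idx_b, idx_b)]
    call = cov[np.ix_(idx_a + idx_b, idx_a + idx_b)]
    return 0.5 * np.log(det(ca) * det(cb) / det(call))

# Exhaustive search for the minimum-information bipartition (MIB),
# normalising by the smaller subset size so unbalanced cuts are not favoured.
best = None
for size in range(1, n // 2 + 1):
    for subset in combinations(range(n), size):
        rest = [i for i in range(n) if i not in subset]
        score = gaussian_mi(list(subset), rest) / min(len(subset), len(rest))
        if best is None or score < best[0]:
            best = (score, subset)

print(best)  # the weakest link should separate one coupled pair from the other
```

Even for this four-element system the search touches every bipartition; the exponential growth of that set is one reason Φ is infeasible for large N.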

Causal density cd(X) is calculated as the fraction of interactions among X’s elements that are causally significant, as identified by ‘Granger causality’ [66]. Granger causality is a statistical interpretation of causality in which x1 ‘causes’ x2 if knowing the past of x1 helps predict the future of x2 better than knowing the past of x2 alone. It is usually calculated by linear autoregression, although nonlinear extensions exist [67]. High cd indicates that elements in X are both globally coordinated (to be useful for predicting the activities of others) and dynamically distinct (to contribute to these predictions in different ways). Like Φ but not CN, cd is sensitive to causal interactions in neural dynamics. Like CN but not Φ, it reflects the activity and not the capacity of X. Like both, it is difficult to calculate for large N.
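A minimal lag-1 Granger-causality test, and the causal density it yields on a toy three-element system, can be sketched as follows (the coupling pattern, lag order and significance threshold are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Toy 3-element system: x0 drives x1 at lag 1; x2 is independent.
T = 5000
x = np.zeros((T, 3))
for t in range(1, T):
    x[t, 0] = 0.5 * x[t - 1, 0] + rng.standard_normal()
    x[t, 1] = 0.5 * x[t - 1, 1] + 0.8 * x[t - 1, 0] + rng.standard_normal()
    x[t, 2] = 0.5 * x[t - 1, 2] + rng.standard_normal()

def granger_p(source, target):
    """Lag-1 Granger test: does the source's past improve prediction of the
    target beyond the target's own past? F-test on the extra regressor."""
    y = x[1:, target]
    past_t = x[:-1, target]
    past_s = x[:-1, source]
    X_r = np.column_stack([np.ones_like(past_t), past_t])  # restricted: own past only
    X_f = np.column_stack([X_r, past_s])                   # full: add source's past
    rss_r = np.sum((y - X_r @ np.linalg.lstsq(X_r, y, rcond=None)[0]) ** 2)
    rss_f = np.sum((y - X_f @ np.linalg.lstsq(X_f, y, rcond=None)[0]) ** 2)
    dof = len(y) - X_f.shape[1]
    F = (rss_r - rss_f) / (rss_f / dof)
    return stats.f.sf(F, 1, dof)

# Causal density: fraction of ordered pairs with significant Granger causality.
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
sig = [granger_p(i, j) < 0.001 for (i, j) in pairs]
cd = sum(sig) / len(pairs)
print(cd)  # x0 -> x1 should register; on this toy system cd is typically 1/6
```

Note the asymmetry: x1’s past does not help predict x0 once x0’s own past is known, so only the true directed link is counted.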

The information integration theory of consciousness (IITC) shares with the DCH the idea that conscious experiences provide informative discriminations among a vast repertoire of possible experiences [13].

In the IITC, the quantity Φ is defined as the information that is integrated across the informational ‘weakest link’ of a system. Importantly, Φ is a measure of the capacity of a neural system to integrate information, whereas CN is a measure of the actual dynamics of the system. A third measure, causal density (cd), measures the fraction of causal interactions among elements of a system that are statistically significant; it is low both for highly integrated systems and for collections of independent elements [55].

Unqualified trust in CN, Φ or cd presupposes an integration theory. This is particularly explicit for Φ because the IITC defines consciousness as information integration, implying that high Φ in any system, biological or otherwise, is sufficient for consciousness. Although all three measures are well grounded in theory, in practice they are difficult to measure, and their experimental exploration stands as an important challenge.

Measures, theories and conflicts

Theories of consciousness recommend the use of certain measures, and the use of certain measures presupposes particular theories. Just as theoretical positions conflict with one another, conflicts among measures can be expected and, in many cases, have been observed (Table 1).

These conflicts can guide further experiments and theoretical refinements. For example, the extent to which PDW corresponds with other behavioural measures will shed light on whether wagering involves separate mechanisms of higher-order access, potentially indicating new aspects of HOT theories.

Regarding brain measures, results indicating the insufficiency of widespread activation [31,56] and γ synchrony [52,53] (when conscious contents are measured by subjective report) challenge basic integration theories and indicate that new insights will be uncovered by comparing these measures with those based on complexity theory.

The most informative new studies will be those that combine multiple measures, both behavioural and brain-based (Box 4).

Presently, these measures tend to pick up on different aspects of consciousness: behavioural measures are mostly used for assessing which contents are conscious, whereas at least some brain-based measures seem well suited for measuring conscious level; graded consciousness can, in principle, be assessed by either type but in different ways (Table 2). Therefore, an integrative approach combining both types of measure in a single study encourages a virtuous circularity in which putative measures and theoretical advances mutually inform, validate and refine one another.

The ultimate virtue in a measure is not its a priori toughness, but its ability to build on intuitions, identify interesting divides in nature and then correct the foundations on which it was built [57].

Box 4

Outstanding questions

  • Can the neural mechanisms underlying subjective report be dissociated from those underlying consciousness per se [14,68]?
  • Which possible conflicts between measures indicated in Table 1 (in the main text) can be demonstrated? Which measures cohere together? Under what conditions do the answers produced by a measure make theoretical sense?
  • How can multiple measures be combined to better isolate the neural mechanisms of consciousness? Can multiple measures isolate independent processes underlying conscious experience?
  • Do CN, Φ or cd behave as predicted by theory? Answering this question depends on (i) experimental methods of sufficient spatiotemporal resolution to reveal relevant details of thalamocortical activity, and (ii) sensible approximations enabling application to large neural datasets.
  • Can a theoretically principled objective measure improve on current clinical methods of diagnosing anaesthesia and impaired consciousness after brain injury?
  • How does a measure of consciousness affect what it supposedly measures? This question relates to behavioural subjective methods, especially introspection [69].
  • Which measures can be applied to infants and non-human animals and how should the results be interpreted [70,71]?
Figure I
Measuring integration and differentiation in neural dynamics, for a system composed of N elements. (a) CN is calculated as the sum of the average MI over N/2 sets of bipartitions indexed by k (e.g. for k = 1 an average MI is calculated over N bipartitions). (b) Φ is calculated as the EI across the MIB. To calculate EI for a given bipartition (indexed by j), one subset is injected with maximally entropic activity (orange stars) and MI across the partition is measured. (c) cd is calculated as the fraction of interactions that are causally significant according to Granger causality. A weighted (and unbounded) version of cd can be calculated as the summed magnitudes of all significant causal interactions (depicted by arrow width).

The Conversation
Media Contacts: 
Tam Hunt – The Conversation
Image Source:
The image is in the public domain.

