Human brains seem hardwired to see human faces where there are none


It’s so commonplace we barely give it a second thought, but human brains seem hardwired to see human faces where there are none – in objects as varied as the moon, toys, plastic bottles, tree trunks and vacuum cleaners. Some have even seen an imagined Jesus in cheese on toast.

Until now, scientists haven’t understood exactly what the brain is doing when it processes visual signals and interprets them as representations of the human face.

Neuroscientists at the University of Sydney now report that our brains identify and analyze real human faces using the same cognitive processes that detect illusory faces.

“From an evolutionary perspective, it seems that the benefit of never missing a face far outweighs the errors where inanimate objects are seen as faces,” said Professor David Alais, lead author of the study, from the School of Psychology.

“There is a great benefit in detecting faces quickly,” he said, “but the system plays ‘fast and loose’ by applying a crude template of two eyes over a nose and mouth. Lots of things can satisfy that template and thus trigger a face detection response.”

This facial recognition response happens lightning fast in the brain: within a few hundred milliseconds.

“We know these objects are not truly faces, yet the perception of a face lingers,” Professor Alais said. “We end up with something strange: a parallel experience that it is both a compelling face and an object. Two things at once. The first impression of a face does not give way to the second perception of an object.”

This error is known as “face pareidolia”. It is such a common occurrence that we accept the notion of detecting faces in objects as ‘normal’ – but humans do not experience this cognitive process as strongly for other phenomena.

The brain has evolved specialized neural mechanisms to rapidly detect faces and it exploits the common facial structure as a short-cut for rapid detection.

“Pareidolia faces are not discarded as false detections but undergo facial expression analysis in the same way as real faces,” Professor Alais said.

Not only do we imagine faces, we analyze them and give them emotional attributes.

The findings are published today in the Proceedings of the Royal Society B.

The researchers say we subject inanimate objects to this expression analysis because, as deeply social beings, simply detecting a face isn’t enough.

“We need to read the identity of the face and discern its expression. Are they a friend or a foe? Are they happy, sad, angry, pained?” Professor Alais said.

The study examined whether, once a pareidolia face is detected, it is subsequently analyzed for facial expression or discarded from face processing as a false detection.

The research shows that once a false face is retained by the brain it is analyzed for its facial expression in the same way that a real face is.

“We showed this by presenting sequences of faces and having participants rate each face’s expression on a scale ranging from angry to happy,” Professor Alais said.

Intriguingly, a known bias in judging human faces also persisted when participants analyzed imagined faces in inanimate objects.

A previous study undertaken by Professor Alais showed that in a Tinder-like situation of judging face after face, a bias is observed whereby the assessment of the current face is influenced by our assessment of the previous face.

The scientists tested this by mixing up real faces with pareidolia faces – and the result was the same.
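The sequential-rating bias described above can be sketched numerically: simulate an observer whose current expression rating is pulled toward the previous response, then correlate successive ratings. This is a minimal illustration with simulated data; the `serial_dependence` helper and the mixing weights are hypothetical, not taken from the study.

```python
import numpy as np

def serial_dependence(ratings):
    """Correlate each rating with the previous trial's rating.

    A positive correlation indicates an assimilative bias: the current
    face is judged more like the previous one, as reported for
    sequential (Tinder-like) face judgments.
    """
    prev, curr = np.asarray(ratings[:-1]), np.asarray(ratings[1:])
    return np.corrcoef(prev, curr)[0, 1]

# Simulated observer: each rating is a blend of the current stimulus and
# the previous response (assumed weights, for illustration only).
rng = np.random.default_rng(0)
true_expression = rng.uniform(-1, 1, 500)   # angry (-1) to happy (+1)
ratings = np.empty_like(true_expression)
ratings[0] = true_expression[0]
for t in range(1, len(ratings)):
    ratings[t] = 0.7 * true_expression[t] + 0.3 * ratings[t - 1]

print(f"serial dependence: {serial_dependence(ratings):.2f}")  # positive
```

With these weights the correlation between successive ratings comes out clearly positive; an unbiased observer rating independent stimuli would produce a correlation near zero.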

“This ‘cross-over’ condition is important as it shows the same underlying facial expression process is involved regardless of image type,” Professor Alais said.

“This means that seeing faces in clouds is more than a child’s fantasy,” he said.

“When objects look compellingly face-like, it is more than an interpretation: they really are driving your brain’s face detection network. And that scowl, or smile; that’s your brain’s facial expression system at work. For the brain, fake or real, faces are all processed the same way.”

Humans are incredibly skilled at both detecting and recognizing faces, with a significant region of cortex dedicated to face processing [1]. Despite this expertise, sometimes we spontaneously perceive faces where there are none—for example in inanimate objects, such as in a tree or a piece of fruit. This phenomenon, known as face pareidolia, can be conceptualized as a natural error of the face-detection system and has recently been demonstrated behaviorally in macaque monkeys [2,3], suggesting that the perception of illusory faces arises from a fundamental feature of the primate face-detection system, rather than being a uniquely human trait. Despite substantial progress in uncovering the primate face processing system [1,4–7], it is still not understood what constitutes a face for visual cortex, and what neural mechanism elicits errors of face detection in ordinary objects.

Here, we combine noninvasive neuroimaging tools with high temporal (MEG) and spatial (fMRI) resolution as well as behavioral ratings and model-based analyses in order to understand how illusory faces are processed in the human brain. Critical to our approach here is the use of a yoked stimulus design.

For each illusory face we found a matched object image, which was semantically and visually similar, but which did not contain an illusory face (Fig. 1). The matched set of objects facilitates examination of how the presence of an illusory face modulates the brain’s representation of an object. In terms of the spatial distribution of responses, previous findings suggest a considerable degree of abstraction in the visual selectivity of face-responsive brain regions [5,6,8–11].

The sensitivity of face-selective regions to abstract faces [5,8,9] suggests these regions are likely sensitive to illusory faces in inanimate objects, but it is an open question whether this sensitivity is specific to face-selective regions, or whether it is widespread throughout category-selective cortex, including regions selective to objects [12,13].

This makes illusory faces in objects particularly interesting in terms of their category membership as they are perceived as both an object and as a face. Importantly, natural examples of illusory faces are visually diverse and do not require any assumptions to be made about the key features that drive the brain’s response to face stimuli. This means that illusory faces are potentially revealing about the behaviorally relevant tuning of the face-detection system.

Fig. 1
Experimental design and analysis.
a Example visual stimuli from the set of 96 photographs used in all experiments. The set included 32 illusory faces, 32 matched objects without an illusory face, and 32 human faces. The human face images used in the experiments are not shown here because we do not have the rights to publish them; the faces shown in this figure are similar photographs of lab members who gave permission to publish their identifiable images. See Supplementary Fig. 1 for all 96 visual stimuli. The original, full-resolution stimuli used in the experiments are available at the Open Science Framework website for this project.
b Behavioral ratings for the 96 stimuli were collected by asking N = 20 observers on Amazon Mechanical Turk to “Rate how easily you can see a face in this image” on a scale of 0–10. Illusory faces are rated as more face-like than matched nonface objects. Error bars are ±1 SEM. Source data are provided as a Source data file.
c Event-related paradigm used for the fMRI (n = 16) and MEG (n = 22) neuroimaging experiments. In both experiments the 96 stimuli were presented in random order while brain activity was recorded. Due to the long temporal lag of the fMRI BOLD signal, the fMRI version of the experiment used a longer presentation time and longer interstimulus intervals than the MEG version. To maintain alertness, the participants’ task was to judge whether each image was tilted slightly to the left or right (3°) using a keypress (fMRI, mean = 92.5%, SD = 8.6%; MEG, mean = 93.2%, SD = 4.8%).
d Method for leave-one-exemplar-out cross-decoding.
A classifier was trained to discriminate between a given category pair (e.g., illusory faces and matched objects) by training on the brain activation patterns associated with all of the exemplars of each category except one, which was left out as the test data from a separate run for the classifier to predict the category label. This process was repeated across each cross-validation fold such that each exemplar had a turn as the left-out data. Accuracy was averaged across all cross-validation folds.
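The leave-one-exemplar-out cross-decoding scheme can be sketched with scikit-learn. Everything below is illustrative: the activation patterns are synthetic, and the array shapes and variable names are assumptions, not the study's actual data.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_exemplars, n_voxels = 32, 100  # 32 exemplars per category, as in the stimulus set

# Synthetic activation patterns: two categories with a small mean offset.
illusory = rng.normal(0.2, 1.0, (n_exemplars, n_voxels))
objects = rng.normal(-0.2, 1.0, (n_exemplars, n_voxels))
X = np.vstack([illusory, objects])
y = np.array([1] * n_exemplars + [0] * n_exemplars)

# Each illusory face shares a group label with its yoked matched object,
# so the pair is held out together on each fold: the classifier never
# sees the left-out exemplar during training (leave-one-exemplar-out).
groups = np.tile(np.arange(n_exemplars), 2)

scores = cross_val_score(SVC(kernel="linear"), X, y,
                         cv=LeaveOneGroupOut(), groups=groups)
print(f"cross-decoding accuracy: {scores.mean():.2f}")  # well above chance (0.5)
```

Accuracy is averaged across the 32 folds; decoding above the 0.5 chance level indicates that the brain activation patterns distinguish the two categories for exemplars the classifier has never seen.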

While understanding the spatial organization of responses to illusory faces will clarify the role of higher-level visual cortex in face perception, identifying the temporal dynamics of how illusory faces are processed is critical to understanding the origin of these face-detection errors.

Human faces are rapidly detected by the human brain [12,13], but it is not known to what extent illusory face perception relies upon the same neural mechanisms. One possibility is that certain arrangements of visual features (such as a round shape) rapidly activate a basic face-detection mechanism, leading to the erroneous perception of a face.

Alternatively, illusory face perception may arise from a slower cognitive reinterpretation of visual attributes as facial features, for example as eyes or a mouth. If illusory faces are rapidly processed, it would suggest the face-detection mechanism is broadly tuned and weighted toward high sensitivity at the cost of increased false alarms. Here we exploit the high temporal resolution of MEG in order to distinguish between these alternative accounts in the human brain.

We find that face-selective regions are sensitive to the presence of an illusory face in an inanimate object, but other occipital–temporal category-selective visual regions are not. In addition to this spatially restricted response, we discover a transient and rapidly evolving response to illusory faces.

Within the first couple of hundred milliseconds, illusory faces are represented more similarly to human faces than their yoked nonface object counterparts are. However, within only 250 ms after stimulus onset, this representation shifts such that illusory faces become indistinguishable from ordinary objects. To better understand what drives this early face-like response to illusory faces, we implement a model-based analysis that compares the brain’s response with behavioral ratings of “faceness” and with the output of computational models of visual features.

We find that the brain’s representation correlates with the visual feature models earlier than with the behavioral model, although the behavioral model explains more variance in the brain’s response overall than the computational models do. Together, our results demonstrate that an initial erroneous face-like response to illusory faces is rapidly resolved, with the representational structure quickly stabilizing into one organized by object content rather than by face perception.
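A model-based comparison of this kind is commonly implemented as representational similarity analysis: build a pairwise-dissimilarity matrix (RDM) from the neural responses and from each candidate model, then correlate them. The toy example below assumes synthetic data; the feature dimensions, noise level, and variable names are illustrative, not the study's.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stimuli, n_channels = 96, 64  # 96 stimuli, as in the experiments

# Toy data: stimulus features and a noisy neural response driven by them.
features = rng.normal(size=(n_stimuli, 8))
neural = (features @ rng.normal(size=(8, n_channels))
          + rng.normal(scale=2.0, size=(n_stimuli, n_channels)))

# Representational dissimilarity matrices, as condensed upper triangles
# of all pairwise stimulus distances (1 - correlation).
brain_rdm = pdist(neural, metric="correlation")
model_rdm = pdist(features, metric="correlation")

# Spearman correlation between brain and model RDMs quantifies how well
# the model captures the brain's representational structure.
rho, _ = spearmanr(brain_rdm, model_rdm)
print(f"brain-model RDM correlation: {rho:.2f}")
```

Computing this correlation at each MEG time point, for a behavioral "faceness" model and for visual-feature models, is one way to ask which model the brain's representation tracks first.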

More information: Face for radio – A shared mechanism for facial expression in human faces and face pareidolia, Proceedings of the Royal Society B (2021). DOI: 10.1098/rspb.2021.0966

