In a world first, neuroscientists from the University of Glasgow have been able to construct 3-D facial models using the unique information stored in an individual’s brain when recalling the face of a familiar person.
The study, which is published today in Nature Human Behaviour, lays the groundwork for a greater understanding of the brain mechanisms of face identification, and could have applications in AI, gaming technology and eyewitness testimony.
A team of Glasgow scientists studied how 14 of their colleagues recognised the faces of four other colleagues, determining which specific facial information each used to identify them from memory.
To test their theories, the researchers had volunteers compare faces that were alike in every other respect, sharing age, gender and ethnicity, but differed in the information that defines the essence of their identity.
By doing so, the scientists designed a methodology which was able to ‘crack the code’ of what defines visual identity and generate it with a computer program.
The scientists then devised a method which, across many trials, allowed them to reconstruct the information specific to an individual's identity in someone else's memory.
Philippe Schyns, Professor of Visual Cognition at the Institute of Neuroscience and Psychology, said:
“It’s difficult to understand what information people store in their memory when they recognise familiar faces.
But we have developed a tool which has essentially given us a method to do just that.
“By reverse engineering the information that characterises someone’s identity, and then mathematically representing it, we were then able to render it graphically.”
The researchers designed a Generative Model of 3-D Face Identity, using a database of 355 3-D faces that described each face by its shape and texture.
They then applied linear models to the faces to be able to extract the shape and texture for non-identity factors of sex, age and ethnicity, thereby isolating a face’s unique identity information.
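The factoring step can be sketched with a toy simulation: fit a linear model predicting each face dimension from the non-identity factors, and keep the residuals as the identity information. All names, dimensions and data below are illustrative stand-ins; the paper's actual model operates on dense 3-D shape and texture descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 355 faces, each described by a flattened
# shape/texture vector (the real model uses dense 3-D meshes and textures).
n_faces, n_dims = 355, 50
X = rng.normal(size=(n_faces, n_dims))           # face descriptors
# Non-identity factors: sex (0/1), age (years), ethnicity indicators.
factors = np.column_stack([
    rng.integers(0, 2, n_faces),                 # sex
    rng.uniform(20, 60, n_faces),                # age
    rng.integers(0, 2, (n_faces, 3)),            # ethnicity indicators
])

# Fit a linear model predicting each face dimension from the factors,
# then keep the residuals: the part of each face the factors cannot
# explain, i.e. its identity-specific information.
design = np.column_stack([np.ones(n_faces), factors])
coef, *_ = np.linalg.lstsq(design, X, rcond=None)
identity = X - design @ coef                     # identity residuals

# The residuals are numerically uncorrelated with every factor column.
print(np.abs(design.T @ identity).max() < 1e-6)
```

Because least-squares residuals are orthogonal to the design matrix, what remains in `identity` carries no linear trace of sex, age or ethnicity.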
In the experiment, the researchers asked observers to rate the resemblance between a remembered familiar face, and randomly generated faces that shared factors of sex, age and ethnicity, but with random identity information.
To model the mental representations of these familiar faces, the researchers estimated the identity components of shape and texture from the memory of each observer.
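One simple way such an estimate can work is reverse correlation: weight each randomly generated identity vector by the observer's resemblance rating and average across trials. The simulation below is an illustrative assumption, not the paper's actual estimator, and the simulated "ratings" are a crude linear stand-in for human judgements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: the remembered face has an unknown identity vector;
# each trial shows a randomly generated face and the observer rates its
# resemblance to the memory.
n_trials, n_dims = 2000, 20
true_identity = rng.normal(size=n_dims)          # unknown target in memory
probes = rng.normal(size=(n_trials, n_dims))     # random identity vectors

# Simulated ratings: higher when the probe is closer to the memory
# (a real observer's judgements would be noisier and nonlinear).
ratings = probes @ true_identity + rng.normal(scale=2.0, size=n_trials)

# Rating-weighted average of the probes recovers the target direction.
estimate = (ratings[:, None] * probes).mean(axis=0)
similarity = np.dot(estimate, true_identity) / (
    np.linalg.norm(estimate) * np.linalg.norm(true_identity))
print(similarity)
```

With enough trials the weighted average converges on the remembered identity, which is why the method needs "many trials" to work.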
As well as identification, the scientists were then able to use the mathematical model to generate new faces by taking the identity information unique to the familiar faces and combining it with a change to their age, sex, ethnicity, or a combination of those factors.
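In a linear face model, this recombination amounts to keeping the identity residual fixed while moving the factor-driven component. A minimal sketch, with an invented toy model whose weights and dimensions are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear face model: a face vector is the sum of a
# factor-driven component (sex, age, ethnicity) and an identity residual.
n_dims = 30
B = rng.normal(size=(3, n_dims))                 # factor -> face weights

def face(sex, age, ethnicity, identity):
    factors = np.array([sex, age / 60.0, ethnicity])
    return factors @ B + identity

identity = rng.normal(size=n_dims)               # unique to one person
young = face(sex=0, age=25, ethnicity=1, identity=identity)
old = face(sex=0, age=70, ethnicity=1, identity=identity)

# Changing age moves the face along the model's age direction while the
# identity component is carried over unchanged.
print(np.allclose(old - young, (70 - 25) / 60.0 * B[1]))
```

The same trick swaps sex or ethnicity: only the factor component changes, so the generated face "ages" or changes category while still looking like the same person.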
The paper, “Modelling Face Memory Reveals Task-Generalizable Representations,” is published in Nature Human Behaviour.
How Artificial Intelligence Could Reconstruct Your Memories
A team of scientists from the University of Oregon is using AI to pull a memory, or at least an image of it, almost literally out of someone's brain.
The team's findings were recently published in The Journal of Neuroscience, detailing how the contents of encoded memories can be retrieved from the angular gyrus (ANG), a part of the posterior lateral parietal cortex that governs a number of functions including language, number processing, spatial cognition, attention and memory retrieval.
Here’s how the multiple-part experiment was designed.
For the first part of the experiment, each of the 23 participants in the study had their brain activity scanned in an fMRI (functional magnetic resonance imaging) machine when they were shown a series of photographs, each depicting a head shot of a different person.
The fMRI detected changes in the participants' cerebral blood flow at the moment they saw each photo, and these slight variations were recorded and processed in real time by AI software.
Characteristics such as skin tone, eye shape and other visibly noticeable facial components were broken down into what are called eigenfaces: the component vectors used in the computations underlying computer vision and facial-recognition software.
“Using an approach inspired by computer vision methods for face recognition, we applied principal component analysis to a large set of face images to generate eigenfaces,” wrote the researchers.
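Principal component analysis of face images can be sketched in a few lines of numpy. The random "images" below are stand-ins for a real face dataset, and the image size and number of components are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for a face-image dataset: each row is a
# flattened grayscale image (real eigenface work uses many more pixels;
# 16x16 keeps the example small).
n_images, h, w = 200, 16, 16
images = rng.normal(size=(n_images, h * w))

# PCA: center the images, then take the top right singular vectors of
# the data matrix.  Each component, reshaped to image size, is an
# "eigenface"; any face is approximated by a weighted sum of them.
mean_face = images.mean(axis=0)
centered = images - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]                             # top 20 components

# A face's eigenface values are its projections onto the components,
# and those few numbers are enough to rebuild an approximate image.
values = centered[0] @ eigenfaces.T              # 20 numbers per face
reconstruction = mean_face + values @ eigenfaces
print(eigenfaces.shape, values.shape)
```

The key point for the experiment is compression: a face becomes a short list of eigenface values, which is a far easier target for a decoder than raw pixels.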
Each face was then scored numerically against these eigenfaces, translating it into something the AI could parse as training data.
“We then modeled relationships between eigenface values and patterns of fMRI activity,” explained the team. “Activity patterns evoked by individual faces were then used to generate predicted eigenface values, which could be transformed into reconstructions of individual faces.”
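That modelling step can be illustrated as a regularised linear regression from voxel activity to eigenface values; the use of ridge regression here, and all the data and dimensions, are assumptions for the sketch, since the paper's exact estimator is not specified in this article.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stand-in data: each training face has an fMRI activity
# pattern (voxels) and a vector of eigenface values.
n_train, n_voxels, n_eig = 500, 100, 20
W_true = rng.normal(size=(n_voxels, n_eig))      # unknown brain encoding
A_train = rng.normal(size=(n_train, n_voxels))   # activity patterns
E_train = A_train @ W_true + rng.normal(scale=0.1, size=(n_train, n_eig))

# Fit ridge-regression weights mapping activity -> eigenface values.
lam = 1.0
W = np.linalg.solve(A_train.T @ A_train + lam * np.eye(n_voxels),
                    A_train.T @ E_train)

# Decode a held-out pattern into predicted eigenface values, which a
# generative face model would then turn back into an image.
A_test = rng.normal(size=(1, n_voxels))
E_pred = A_test @ W
E_actual = A_test @ W_true
corr = np.corrcoef(E_pred.ravel(), E_actual.ravel())[0, 1]
print(corr)
```

Once the predicted eigenface values are in hand, reconstruction is just the weighted sum of eigenfaces from the previous step, which is what produces the blurry but recognisable images.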
For the second (or what might be called “mind-reading”) part of the experiment, the AI was then tested for its ability to reconstruct a new round of face photographs, using only data from participants’ recorded brain activity, culled via the fMRI machine. Based on the training data from the previous round, the AI was able to “translate” the test subjects’ neural patterns into eigenfaces that formed the basis of the reconstructed images. Here’s what came up:
The reconstructions are not especially accurate, yet an eerie resemblance is already emerging.
In yet another test, participants were asked to recall someone's face from memory, a process that engages the brain's angular gyrus. The AI-powered reconstructions from this recalled activity were surprisingly successful, with the AI able to draw out, for the most part, distinct qualities like gender, skin colour and emotional expression.
To validate their results and gain some insight into the inner workings of the brain, the team compared reconstructions made from the memory-retrieving angular gyrus (ANG) with reconstructions made from the occipitotemporal cortex (OTC), a region sensitive to facial features.
“Strikingly, we also found that a model trained on ANG activity patterns during face perception was able to successfully reconstruct an independent set of face images that were held in memory,” wrote the researchers. “[Activity] patterns in… the angular gyrus, [support the] successful reconstruction of perceived and remembered faces, confirming a role for this region in actively representing remembered content.”
As one can see here, the so-called mind-reading capabilities of machines aren't quite there yet.
And as the researchers point out, and as the results bear out, people still control how their memories take shape; the reconstructions seen in this experiment are not yet accurate enough, for example, for someone to identify a criminal suspect from memory beyond the shadow of a doubt.
But the technology appears to be making strides, and we may eventually get to that point someday.
More information: Jiayu Zhan et al. Modelling face memory reveals task-generalizable representations, Nature Human Behaviour (2019). DOI: 10.1038/s41562-019-0625-3
Journal information: Nature Human Behaviour
Provided by University of Glasgow