New hyper-realistic masks are so convincing that people mistake them for real faces

Some silicone masks are now so realistic they can easily be mistaken for real faces, new research suggests.

Hyper-realistic masks are made from flexible materials such as silicone and are designed to imitate real human faces – down to every last freckle, wrinkle and strand of real human hair.

In a study by the Universities of York and Kyoto, researchers asked participants to look at pairs of photographs and decide which showed a normal face and which showed a person wearing a mask.

Surprisingly, participants made the wrong call in one-in-five cases.

Everyday life

The 20% error rate observed in the study likely underestimates the extent to which people would struggle to tell an artificial face from the real thing outside the lab, the researchers say.

The researchers collected data from participants in both the UK and Japan to establish whether race affected performance.

When the photographs showed faces of a different race to the participant, response times were slower and selections were 5% less accurate.

Dr Rob Jenkins, from the Department of Psychology at the University of York, said: “In our study, participants had several advantages over ordinary people in everyday life. We made it clear to participants that their task was to identify the mask in each pair of images and we showed them example masks before the test began.

“The real-world error rate is likely to be much higher because many people may not even be aware hyper-realistic masks exist and are unlikely to be looking out for them.”

“The current generation of masks is very realistic indeed, with most people struggling to tell an artificial face from the real thing.”

Criminal cases

There are now dozens of criminal cases in which culprits have passed themselves off as people of a different age, race or gender, sending police investigations down the wrong path.

In one recent case, an international gang used a hyper-realistic mask to impersonate a French minister, defrauding business executives out of millions of pounds.

Dr Jet Sanders, who worked on the study while a PhD student at the University of York, said: “Failure to detect synthetic faces may have important implications for security and crime prevention, as hyper-realistic masks may allow the key characteristics of a person’s appearance to be incorrectly identified.

“These masks currently cost around £1,000 each, and we expect them to become more widely used as advances in manufacturing make them more affordable.”


Background

In recent years, fraudsters have begun to use readily accessible digital manipulation techniques to carry out face morphing attacks. By submitting a morph image (a 50/50 average of two people’s faces) for inclusion in an official document such as a passport, it may be possible for both people to resemble the morph closely enough that each can use the resulting genuine ID document.

Limited research with low-quality morphs has shown that human detection rates were poor but that training methods can improve performance. Here, we investigate human and computer performance with high-quality morphs, comparable with those expected to be used by criminals.

Results

Over four experiments, we found that people were highly error-prone when detecting morphs and that training did not produce improvements. In a live matching task, morphs were accepted at levels suggesting they represent a significant concern for security agencies and detection was again error-prone. Finally, we found that a simple computer model outperformed our human participants.

Conclusions

Taken together, these results reinforce the idea that advanced computational techniques could prove more reliable than training people when fighting these types of morphing attacks. Our findings have important implications for security authorities worldwide.

Electronic supplementary material

The online version of this article (10.1186/s41235-019-0181-4) contains supplementary material, which is available to authorized users.

Keywords: Morphing attack, Face morph, Fraud, Face matching, Morph detection

Significance

In order to minimize the use of fraudulent documents as forms of identification, anti-counterfeit measures such as watermarks are often included. With an increase in the detection of fraudulent IDs, security officers have recently seen a rise in the use of fraudulently obtained genuine (FOG) documents. As the name suggests, these involve deception during the application process in order to obtain a genuine document, equipped with all the necessary watermarks, and so on.

One method used by fraudsters is to submit a morph image (a 50/50 average of two people’s faces) for inclusion in an official document like a passport. If both people sufficiently resemble the morph, they could both use the resulting genuine passport for international travel. Recent research has begun to investigate whether people can detect morphs and has suggested that training might provide an effective way to increase performance.

Here, we reconsidered these findings with the use of higher-quality morphs, where every effort was made to produce images comparable with those we expect criminals to use. We found that on-screen morph detection was poor and training did not lead to improvements. When morphs were compared to faces during a live interaction, they were accepted at concerning levels and, again, detection was error-prone.

Importantly, we found that a simple computer model performed better than our human participants, suggesting that security agencies should focus on automated solutions rather than training people when fighting morphing attacks.

Background

The use of biometrics in identification is commonplace across a variety of contexts. For example, face photographs are featured in many forms of documentation internationally, including passports and driving licenses. Our reliance on the face as a means of identification is likely a result of our belief that we are face experts. However, in reality, we are only familiar face experts (Young & Burton, 2018).

Numerous studies have now shown that we are error-prone when making decisions based upon unfamiliar faces (e.g. Bruce, Henderson, Newman, & Burton, 2001; Burton, White, & McNeill, 2010; Jenkins, White, Van Montfort, & Burton, 2011; Kemp, Towell, & Pike, 1997). Further, and perhaps surprisingly, trained passport officers perform at similar levels to untrained university students (White, Kemp, Jenkins, Matheson, & Burton, 2014).

Errors with unfamiliar faces become especially problematic when dealing with various types of fraudulent identification. For instance, researchers in recent years have begun to investigate the issue of “face morphing attacks” (Ferrara, Franco, & Maltoni, 2014).

This term refers to the following three-step process for obtaining a passport fraudulently. First, person A (who has no criminal record) creates a morphed photo of himself and person B (whose prior record prevents him from international travel) and submits this AB morph as his ID photograph with his passport application.

Second, the morph is compared with previous images of person A that are kept on file and the application is subsequently approved by the passport issuing officer on the grounds that the image sufficiently resembles him. Third, person A gives this FOG (Interpol, n.d.) passport to person B, who then proceeds to use it during travel as he also resembles the morph image sufficiently to pass through border control.

Problematically, since the document itself is genuine, typical anti-counterfeit measures (e.g. the use of security watermarks, inks, and fibers) are powerless to detect these types of fraud. Therefore, detection must rely upon comparing the morph with previously stored face photographs (at the point of issuance) or the “live” face (at the point of presentation for travel).

As digital image manipulation software becomes more advanced, the resulting morphs become more difficult to detect. One approach is to develop increasingly sophisticated computer methods for morph detection (e.g. Makrushin, Neubert, & Dittmann, 2017; Neubert, 2017; Raghavendra, Raja, Venkatesh, & Busch, 2017a, 2017b; Scherhag, Nautsch, et al., 2017; Scherhag, Raghavendra, et al., 2017; Seibold, Samek, Hilsmann, & Eisert, 2017, 2018). For example, inconsistencies between the reflections visible in the eyes and skin could signal a morphed image (Seibold, Hilsmann, & Eisert, 2018). Such techniques may be incorporated into automated border control (ABC) systems in order to prevent the use of morph images.

In many situations, however, the decision to accept an ID image is left to a human operator. Indeed, even in face matching scenarios where algorithms are initially employed, human users are often presented with a “candidate list” and are required to make the final selection, potentially reducing the overall accuracy of the process (White, Dunn, Schmid, & Kemp, 2015). Although important across a variety of contexts, the question of whether people are able to detect morphs and/or whether they accept such images as genuine ID photographs has received little attention to date.

Ferrara, Franco, and Maltoni (2016) provided evidence that several computer algorithms performed with high error rates when tasked with detecting morph images. In addition, they found that human performance on their task was also poor, with morphs going undetected in most cases (see Makrushin et al., 2017, for similar findings).

In line with previous work on face matching with expert populations (White et al., 2014), their results also showed that professionals working in the field (border guards) were no better than university students and employees in detecting morphs.

Recently, two articles by Robertson and colleagues have specifically focused on human performance in the matching and detection of morphs. In the first, participants completed computer tasks in which they decided whether two face images onscreen depicted the same person or not (Robertson, Kramer, & Burton, 2017).

In seven trials, the two images were different photographs of the same face, and in another seven trials, the images were photographs of two different people. For the remaining 35 trials, a face photograph was paired with a morph containing differing amounts of that face and a second person. (When creating morphs, the researcher can specify the percentage weighting of each identity contained in the final image.)
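
To make the weighting concrete, the following minimal sketch (in Python, using NumPy and Pillow) treats a morph as a weighted pixel blend of two face photographs. The file names and the simple pixel-averaging approach are illustrative assumptions only; dedicated morphing software such as JPsychomorph also aligns and warps facial landmarks before blending, which is what makes the resulting morphs convincing.

    # Illustrative sketch: a morph as a weighted pixel blend of two faces.
    # Assumes the two photographs (hypothetical file names) are the same size
    # and already aligned; real morphing tools warp facial landmarks first.
    import numpy as np
    from PIL import Image

    def blend_morph(path_a, path_b, weight_a=0.5):
        """Return an image containing weight_a of face A and (1 - weight_a) of face B."""
        a = np.asarray(Image.open(path_a).convert("RGB"), dtype=float)
        b = np.asarray(Image.open(path_b).convert("RGB"), dtype=float)
        blended = weight_a * a + (1.0 - weight_a) * b  # 0.5 gives a 50/50 morph
        return Image.fromarray(blended.astype(np.uint8))

    blend_morph("face_a.jpg", "face_b.jpg", weight_a=0.5).save("morph_50_50.png")

Setting weight_a to 0.5 corresponds to the 50/50 morphs discussed here; other weightings bias the image towards one of the two identities.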

The results demonstrated that 50/50 morphs (weighting both identities equally) were accepted as “matches” for the faces they were paired with on 68% of trials. After participants were given instructions regarding the nature of morphs, along with the additional response option of “morphed image,” they accepted morphs as “matches” on only 21% of trials. Taken together, the authors suggested that erroneously accepting morphs as ID images was common, but that these errors can be significantly reduced through instruction.

In the second article, the researchers investigated whether people were able to detect morph images and whether training could help with this task (Robertson et al., 2018). Participants were shown ten-image arrays containing a mixture of the morph images and exemplars (original, unmorphed faces) used in the previous article and were asked to identify which were the morphs. Performance was poor, with the 50/50 morphs resulting in average d’ sensitivities of 0.56 and 0.96 (for the two groups that took part: training versus none), suggesting that morphs were not readily detected.

However, providing information regarding the nature of the morphs, along with some tips to help with identifying them, resulted in a significant increase in sensitivity (to 2.69 and 2.32, respectively). An additional training protocol, in which feedback was provided via a two-alternative forced choice (2AFC) task, also led to a further benefit for the group that received it (the first mentioned in the values reported above). The authors concluded that people were poor at detecting morphs, but that training could significantly improve performance.
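
For readers unfamiliar with the d’ (d-prime) sensitivity measure reported above, it is computed as the difference between the z-transformed hit rate and the z-transformed false-alarm rate, where z is the inverse of the standard normal cumulative distribution function. The short Python sketch below illustrates the calculation; the rates used are invented for illustration and are not data from the studies described here.

    # Illustrative d' (sensitivity) calculation from signal detection theory.
    # The hit and false-alarm rates below are made-up example values.
    from scipy.stats import norm

    def d_prime(hit_rate, false_alarm_rate):
        """d' = z(hit rate) - z(false-alarm rate), z being the inverse normal CDF."""
        return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

    print(d_prime(0.60, 0.40))  # ~0.51: weak discrimination, near the pre-training levels above
    print(d_prime(0.90, 0.10))  # ~2.56: much better discrimination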

As mentioned earlier, with improvements in image manipulation techniques, and in combination with a criminal’s determination to avoid being caught, we should expect that real-world morphs will be made with a level of sophistication that renders them virtually undetectable to the human eye. Problematically for the two articles investigating human acceptance and detection of morphs (Robertson et al., 2017, 2018), the images used were not representative of the cutting-edge methods that are likely to be applied by fraudsters. Although the initial face averaging was carried out using advanced morphing software (JPsychomorph; e.g. Benson & Perrett, 1993), there was no subsequent “touch up” stage in order to remove artefacts that are known to result from the averaging process (e.g. the presence of a secondary outline for the hair).

As Fig. 1 (top row) illustrates, the 50/50 morph (center) included obvious artefacts that can be easily removed using image-editing software. Indeed, these artefacts were highlighted to participants during the morph detection training phase of both previous studies: “look for a ‘ghost-like’ outline of another face; look for the outline of another person’s hair over the forehead” (Robertson et al., 2018, p. 4). In addition, by presenting faces that have been cropped to remove the neck and background, these images did not conform to real-world ID specifications and also highlighted to participants that all the images had been altered to some extent. For these reasons, we predict that the performance levels reported, along with the apparent training benefits, may only be of limited utility with regard to real-world behaviors when using more realistic images.

Fig. 1. Top: an example of the images used in previous work (adapted from Robertson et al., 2018). Bottom: an example of the images used in the current work (Experiment 3). The three faces depict two individuals (left and right) and a morph created using these images (center). The individuals pictured have given permission for their images to be reproduced here.

In the current set of studies, we aim to address these issues by creating higher-quality morph images and investigating both human and computer detection of these images. It is important to determine whether people accept morphs, or can detect their use, when every effort is made to produce images that reflect real-world fraud.

For example, if training methods were implemented with the assumption that morph detection would be significantly improved, this might result in a false sense of security (literally) for passport control and issuing officers. Therefore, in this paper, we investigate human morph detection performance with and without training, reflecting a passport-issuing context (Experiments 1 and 2), whether people accept morphs as ID images in a “live” task, reflecting a border control scenario (Experiment 3), and, finally, whether computational modelling outperforms human detection, providing a more suitable alternative than training people (Experiment 4).


Source:
University of York
Media Contacts:
Shelley Hughes – University of York
Image Source:
The image is credited to University of York.

Original Research: Open access
“More human than human: a Turing test for photographed faces”. Rob Jenkins et al.
Cognitive Research: Principles and Implications doi:10.1186/s41235-019-0197-9.
