The study identified two different groups of ‘liars’: those who activate their cheek muscles when they lie, and those who activate their eyebrows.
According to the researchers, the technology has great potential for detecting deception in real-life contexts, such as security and crime.
The study was conducted by a team of experts from Tel Aviv University headed by Prof. Yael Hanein of the Center of Nanoscience and Nanotechnology and School of Electrical Engineering, Iby and Aladar Fleischman Faculty of Engineering, and Prof. Dino Levy from the Coller School of Management.
The team included Dr. Anastasia Shuster, Dr. Lilach Inzelberg, Dr. Uri Ossmy and PhD candidate Liz Izakon.
The paper was published in the leading journal Brain and Behavior.
The new study was founded upon a groundbreaking innovation from Prof. Hanein’s laboratory: stickers printed on soft surfaces containing electrodes that monitor and measure the activity of muscles and nerves.
The technology, already commercialized by X-trodes Ltd., has many applications, such as monitoring sleep at home and early diagnosis of neurological diseases. This time the researchers chose to explore its effectiveness in a different arena – lie detection.
Prof. Levy explains: “Many studies have shown that it’s almost impossible for us to tell when someone is lying to us. Even experts, such as police interrogators, do only a little better than the rest of us. Existing lie detectors are so unreliable that their results are not admissible as evidence in courts of law – because just about anyone can learn how to control their pulse and deceive the machine.
Consequently, there is a great need for a more accurate deception-identifying technology. Our study is based on the assumption that facial muscles contort when we lie, and that so far no electrodes have been sensitive enough to measure these contortions.”
The researchers attached the novel stickers with their special electrodes to two groups of facial muscles: the cheek muscles close to the lips, and the muscles over the eyebrows.
Participants were asked to sit in pairs facing one another, with one wearing headphones through which the word ‘line’ or ‘tree’ was transmitted. When the wearer heard ‘line’ but said ‘tree’, or vice versa, they were obviously lying, and their partner’s task was to try to detect the lie. The two subjects then switched roles.
As expected, participants were unable to detect their partners’ lies with any statistical significance. However, the electrical signals delivered by the electrodes attached to their face identified the lies at an unprecedented success rate of 73%.
Prof. Levy: “Since this was an initial study, the lie itself was very simple. Usually when we lie in real life, we tell a longer tale which includes both deceptive and truthful components. In our study we had the advantage of knowing what the participants heard through the headsets, and therefore also knowing when they were lying.
“Thus, using advanced machine learning techniques, we trained our program to identify lies based on EMG (electromyography) signals coming from the electrodes. Applying this method, we achieved an accuracy of 73% – not perfect, but much better than any existing technology. Another interesting discovery was that people lie through different facial muscles: some lie with their cheek muscles and others with their eyebrows.”
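The article does not specify which machine-learning model the team used, only that it classified lies from EMG signals. The sketch below illustrates the general shape of such a pipeline on entirely synthetic data: extract a simple amplitude feature (root-mean-square) from each EMG channel, then fit a minimal classifier. The signal dimensions, the noise levels, and the nearest-centroid model are all illustrative assumptions, not the authors’ method.

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_features(emg):
    """Root-mean-square amplitude per EMG channel (cheek, eyebrow)."""
    return np.sqrt(np.mean(emg ** 2, axis=-1))

# Synthetic stand-in data: 50 truth + 50 lie trials, 2 channels,
# 500 samples each; "lie" trials get stronger simulated muscle activity.
truth = rng.normal(0.0, 1.0, (50, 2, 500))
lies = rng.normal(0.0, 1.5, (50, 2, 500))

X = np.vstack([rms_features(truth), rms_features(lies)])
y = np.array([0] * 50 + [1] * 50)  # 0 = truth, 1 = lie

# Nearest-centroid classifier: a deliberately simple stand-in for the
# unspecified machine-learning model used in the study.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

accuracy = np.mean([predict(x) == label for x, label in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")
```

On real EMG, feature extraction and model choice would matter far more than in this toy separation, where the two classes differ only in overall amplitude.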
The researchers believe that their results can have dramatic implications in many spheres of our lives. In the future, the electrodes may become redundant, with video software trained to identify lies based on the actual movements of facial muscles. Prof. Levy predicts:
“In the bank, in police interrogations, at the airport, or in online job interviews, high-resolution cameras trained to identify movements of facial muscles will be able to tell truthful statements from lies. Right now, our team’s task is to complete the experimental stage, train our algorithms and do away with the electrodes. Once the technology has been perfected, we expect it to have numerous, highly diverse applications.”
Are there any observable behaviors or cues that can differentiate lying from truth-telling? Almost all researchers in the field of deception detection agree that there is no “Pinocchio’s nose” that can serve as an easy indicator of deception (DePaulo et al., 2003). Nevertheless, many researchers are still trying to find cues to deception (Levine, 2018; Denault et al., 2020).
The “leakage theory” asserts that high-stake lies (the rewards come with serious consequences or there can be severe punishments) can result in “leakage” of the deception into physiological changes or behaviors (especially microexpressions that last for 1/25 to 1/5 s; Ekman and Friesen, 1969; Ekman, 2003; Porter et al., 2011, 2012; Su and Levine, 2016; Matsumoto and Hwang, 2020).
Specifically, from the perspective of leakage theory (ten Brinke and Porter, 2012; Ten Brinke et al., 2012a,b), observable emotional facial expressions (microexpressions and macroexpressions) can, to some degree, determine who is lying and who is telling the truth; it is a matter of probability (see Levine, 2018, 2019).
However, this possibility is debated. While some researchers (ten Brinke and Porter, 2012; Ten Brinke et al., 2012b; Matsumoto and Hwang, 2018) argued that emotional facial microexpressions could be a cue to lies and supported their claims with empirical evidence, Burgoon (2018) argued that detecting microexpressions is not the best way of catching liars. Furthermore, Vrij et al. (2019) even categorized microexpressions as pseudoscience.
Even if it is difficult, or even impossible, for human beings to detect liars based on microexpressions, there do exist some behavioral cues that can, to some degree, differentiate lying from truth-telling (Vrij et al., 2000, 2006). Specifically, pupil dilation and pitch have been shown to be closely related to lying (Levine, 2018, 2019). Most deception researchers agree that lying involves processes or factors such as arousal and felt emotion (Zuckerman et al., 1981).
Therefore, emotional facial expressions can be valid behavioral cues to deception. Meanwhile, there are involuntary aspects of emotional expression. As noted by Darwin, some actions of the facial muscles are the most difficult to control voluntarily and the hardest to inhibit (the so-called Inhibition Hypothesis; see also Ekman, 2003). When a strongly felt genuine emotion is present, the related facial expressions cannot be suppressed (Baker et al., 2016).
Hurley and Frank (2011) provided evidence for Darwin’s hypothesis and found that deceivers could not control some particular elements of their facial expression, such as eyebrow movements. Liars would feel fear, duping delight, or disgust, or appear tense while lying, and would attempt to suppress these emotions by neutralizing, masking, or simulating (Porter and Ten Brinke, 2008).
However, the liars could not inhibit them completely and the felt emotion would be “leaked” out in the form of microexpressions, especially under high-stake situations (Ekman and Friesen, 1969).
The claim of emotional leakage is supported by some recent research (Porter et al., 2011, 2012). When liars fake an unfelt emotional facial expression, or neutralize a felt emotion, at least one inconsistent expression would leak and appear transiently (Porter and Ten Brinke, 2008). ten Brinke and Porter (2012) showed that liars would present unsuccessful emotional masking and certain leaked facial expressions (e.g., “the presence of a smirk”).
In addition, they found that false remorse was associated with (involuntary and inconsistent) facial expressions of happiness and disgust (Ten Brinke et al., 2012a).
In addition to the support for emotional leakage, research also shows that leaked emotions can differentiate lies from truth-telling. Wright Whelan et al. (2014) considered a few cues that successfully distinguished liars from truth-tellers, including gaze aversion and head shakes. They combined the information from each cue to classify individual cases and achieved an accuracy rate as high as 78%.
Meanwhile, Wright Whelan et al. (2015) found that non-police and police observers could reach accuracies of 68% and 72%, respectively, when required to detect deception in high-stake, real-life situations. Matsumoto and Hwang (2018) found that facial expressions of negative emotions lasting less than 0.40 and 0.50 s could differentiate truth-tellers from liars. These studies all suggest that leaked facial expressions can help human beings detect liars successfully.
Besides human research, attempts have also been made to use machine learning to automatically detect deception by utilizing leaked emotions. A meta-analysis by Bond and DePaulo (2006) showed that human observers only achieved a slightly-better-than-chance accuracy when detecting liars. Compared to humans, some previous works with machine learning used the so-called reliable facial expressions (or involuntary facial expressions) to automatically detect deceit and achieved an accuracy above 70% (Slowe and Govindaraju, 2007; Zhang et al., 2007).
Given that the subtle differences of emotional facial expressions may not be detected by naïve human observers, computer vision may capture the different and subtle features between lying and truth-telling situations that cannot be perceived by a human being. Su and Levine (2016) found that emotional facial expressions (including microexpressions) could be effective cues for machine learning to detect high-stake lies, in which the accuracy was much higher than those reported in previous studies (e.g., Bond and DePaulo, 2006).
They found that some Action Units (AUs; the contraction or relaxation of one or more muscles; see Ekman and Friesen, 1976), such as AU1, AU2, AU4, AU12, AU15, and AU45 (blink), could be potential indicators for distinguishing liars from truth-tellers in high-stake situations. Bartlett et al. (2014) showed that computer vision could differentiate deceptive pain facial signals from genuine ones at 85% accuracy. Barathi (2016) developed a system that detected liars based on facial microexpressions, body language, and speech analysis.
They found that the efficiency of the facial microexpression detector was 82%. Similarly, the automated deception detection system developed by Wu et al. (2018) showed that predictions of microexpressions could be used as features for deception detection, and the system obtained an area under the precision-recall curve (AUC) of 0.877 while using various classifiers.
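The area under the precision-recall curve that Wu et al. report is a standard ranking metric. As a concrete reference for how it is computed, the snippet below sorts predictions from most to least lie-like and sums precision weighted by recall increments; the labels and scores are made up purely for illustration.

```python
import numpy as np

def precision_recall_auc(y_true, scores):
    """Area under the precision-recall curve, computed by sorting
    predictions from most to least lie-like and summing
    precision * delta-recall at each step."""
    y = np.asarray(y_true)
    order = np.argsort(-np.asarray(scores))
    y = y[order]
    tp = np.cumsum(y)        # true positives accumulated down the ranking
    fp = np.cumsum(1 - y)    # false positives accumulated down the ranking
    precision = tp / (tp + fp)
    recall = tp / y.sum()
    return float(np.sum(precision * np.diff(np.concatenate(([0.0], recall)))))

# Made-up labels (1 = lie) and classifier scores:
auc = precision_recall_auc([1, 0, 1, 0], [0.9, 0.8, 0.6, 0.2])
print(f"{auc:.3f}")  # 0.833
```

Unlike ROC-AUC, this metric stays informative when lies are rare relative to truthful statements, which is presumably why it was chosen.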
The leakage theory of deception predicts that when lying, especially in high-stake situations, people would be afraid of their lies being detected and would therefore experience fear. This fear could then leak and has the potential to be detected (Levine, 2019).
Meanwhile, it is presumed that if the fear associated with deception leaks, its duration would be short due to the nature of leaking and repressing (it would appear as fleeting fear microexpressions). Some may argue that fear may also appear in truth-telling. This can be true.
Nevertheless, for a truth-teller, the fear of being wrongly treated as a liar would leak less, since a truth-teller does not need to repress that fear as hard as a liar does. As a result, the degree of repression will differ between liars and truth-tellers. On average, the duration of fear (or the AUs of fear) in lying situations would be shorter than in truth-telling situations, because repression is harder in the former.
Stakes may play a vital role while using an emotional facial expression as a cue to detect deception. Participants experience fewer emotions or less cognitive load in laboratory studies (Buckley, 2012). Almost all laboratory experiments are typical of low stakes and are not sufficiently motivating to trigger emotions giving rise to leakage (in the form of microexpressions).
Consequently, liars in laboratory experiments are not as nervous as in real-life high-stake situations, with no or little emotion leakage. As noted by Vrij (2004), some laboratory-based studies in which the stakes were manipulated showed that high-stake lies were easier to detect than low-stake ones. Frank and Ekman (1997) stated that “the presence of high stakes is central to liars feeling strong emotion when lying.”
Therefore, lying in high-stake situations would be more detectable by using emotional facial expression cues, and leaked emotional facial expressions would mostly occur in a high-stake context.
Hartwig and Bond (2014) had an opposite opinion and argued that even in high-stake situations, it could still be difficult to tell liars from truth-tellers.
They claimed that the context of the high stake would influence both liars and truth-tellers, as liars and truth-tellers might experience similar psychological processes. In other words, high-stake situations would cause inconsistent emotional expressions, like fear, not only in liars, but also in truth-tellers.
This claim is true to some degree (ten Brinke and Porter, 2012), but high stakes do not necessarily eliminate all the differences between liars and truth-tellers. Even though high-stake situations increase pressure on both liars and truth-tellers, it can be assumed that the degree of increment would be different, and liars would feel much higher pressure than truth-tellers under high stakes.
In addition, fabricating a lie requires liars to think more and therefore would cause a higher emotional arousal in them than in truth-tellers. Consequently, for liars, the frequency or probability of leaking an inconsistent emotional expression (say, fear) would be higher and thus easier to detect. In theory, the higher the stakes are, the more likely cues associated with deception (e.g., fear) are leaked, and the easier the liars could be identified using these cues.
Besides duration, other dynamic features (Ekman et al., 1981; Frank et al., 1993), such as symmetry, could also vary between genuine and fake facial expressions. Ekman et al. (1981) manually analyzed facial asymmetry using the Facial Action Coding System (FACS) and showed that genuine smiles are more symmetrical than deliberate ones.
Similarly, the leaked emotional facial expressions of fear while lying and the less-leaked ones when telling the truth may also show different degrees of symmetry. However, the approach Ekman et al. (1981) used could be time-consuming and subjective.
Thus, in the current study, we proposed a method that uses coherence (a measure of the correlation between two signals/variables) to quantify asymmetry. The more symmetrical the movements of the left and right sides of the face, the higher the correlation between them. Consequently, the value of coherence (ranging from 0 to 1) can serve as a measure of symmetry or asymmetry.
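The intuition behind the proposed measure can be sketched in a few lines. As a minimal illustration, the code below uses the squared Pearson correlation as a simplified stand-in for spectral coherence (which would additionally resolve the relationship by frequency); the movement signals are synthetic.

```python
import numpy as np

def movement_symmetry(left, right):
    """Squared Pearson correlation between left- and right-side movement
    signals: close to 1.0 for symmetrical movement, near 0.0 when the
    two sides move independently. A simplified stand-in for coherence."""
    r = np.corrcoef(left, right)[0, 1]
    return float(r ** 2)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
left = np.sin(2 * np.pi * 3 * t)  # synthetic left-side movement signal

# Right side mirroring the left with mild noise vs. moving independently:
symmetric = movement_symmetry(left, left + 0.05 * rng.normal(size=200))
asymmetric = movement_symmetry(left, rng.normal(size=200))
print(symmetric > asymmetric)  # True
```

Under the paper’s hypothesis, lying trials (with more leakage and repression) would yield lower symmetry scores than truth-telling trials.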
Based on the leakage theory and previous evidence, we hypothesize that (1) emotional facial expressions of fear (fear of being caught) can differentiate lying from truth-telling in high-stake situations; (2) the duration of AUs of fear in lying would be shorter than that in truth-telling; (3) the symmetry of facial movements will be different, as facial movements in lying situations will be more asymmetrical (due to the nature of repressing and leaking).
Reference link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8217652/
Original Research: Open access.
“Lie to my face: An electromyography approach to the study of deceptive behavior” by Anastasia Shuster et al., Brain and Behavior