Trash talking has a long and colorful history of flustering game opponents, and now researchers at Carnegie Mellon University have demonstrated that discouraging words can be perturbing even when uttered by a robot.
The trash talk in the study was decidedly mild, with utterances such as “I have to say you are a terrible player,” and “Over the course of the game your playing has become confused.”
Even so, people who played a game with the robot — a commercially available humanoid robot known as Pepper — performed worse when the robot discouraged them and better when the robot encouraged them.
Lead author Aaron M. Roth said some of the 40 study participants were technically sophisticated and fully understood that a machine was the source of their discomfort.
“One participant said, ‘I don’t like what the robot is saying, but that’s the way it was programmed so I can’t blame it,’” said Roth, who conducted the study while he was a master’s student in the CMU Robotics Institute.
But the researchers found that, overall, human performance ebbed regardless of technical sophistication.
The study, presented last month at the IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) in New Delhi, India, is a departure from typical human-robot interaction studies, which tend to focus on how humans and robots can best work together.
“This is one of the first studies of human-robot interaction in an environment where they are not cooperating,” said co-author Fei Fang, an assistant professor in the Institute for Software Research.
The study has implications for a world where the number of robots and internet of things (IoT) devices with artificial intelligence capabilities is expected to grow exponentially.
“We can expect home assistants to be cooperative,” she said, “but in situations such as online shopping, they may not have the same goals as we do.”
The study was an outgrowth of a student project in AI Methods for Social Good, a course that Fang teaches.
The students wanted to explore the uses of game theory and bounded rationality in the context of robots, so they designed a study in which humans would compete against a robot in a game called “Guards and Treasures.”
Guards and Treasures is a so-called Stackelberg game, which researchers use to study rationality. It is a typical game for studying defender-attacker interactions in research on security games, an area in which Fang has done extensive work.
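As a concrete illustration of the defender-attacker logic behind such games, the short Python sketch below enumerates a defender's randomized commitments in a toy two-target Stackelberg security game. The target names and payoff numbers are invented for illustration only; they are not the actual "Guards and Treasures" parameters.

```python
# Toy two-target Stackelberg security game (illustrative payoffs only).
# The defender (leader) commits to a randomized coverage of the targets;
# the attacker (follower) observes that coverage and attacks the target
# with the highest expected payoff.

# Per target: defender/attacker payoffs when the attacked target is
# covered (cov) or left uncovered (unc).
TARGETS = {
    "A": {"d_cov": 5, "d_unc": -10, "a_cov": -4, "a_unc": 8},
    "B": {"d_cov": 3, "d_unc": -6, "a_cov": -3, "a_unc": 5},
}

def attacker_best_response(coverage):
    """Follower: attack the target with the highest expected utility."""
    def attacker_utility(t):
        p = TARGETS[t]
        return coverage[t] * p["a_cov"] + (1 - coverage[t]) * p["a_unc"]
    return max(TARGETS, key=attacker_utility)

def best_defender_commitment(steps=1000):
    """Leader: search splits of one guard over the two targets, scoring
    each split by the defender's payoff under the attacker's response."""
    best = None
    for i in range(steps + 1):
        coverage = {"A": i / steps, "B": 1 - i / steps}
        attacked = attacker_best_response(coverage)
        p = TARGETS[attacked]
        d_util = (coverage[attacked] * p["d_cov"]
                  + (1 - coverage[attacked]) * p["d_unc"])
        if best is None or d_util > best[0]:
            best = (d_util, coverage, attacked)
    return best

utility, coverage, attacked = best_defender_commitment()
print(f"Defender covers A with p={coverage['A']:.2f}; "
      f"attacker hits {attacked}; defender utility = {utility:.2f}")
```

With these assumed payoffs, the defender's best commitment is the coverage split that leaves the attacker indifferent between the two targets, which is the characteristic structure of Stackelberg security game solutions.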
Each participant played the game 35 times with the robot, while either soaking in encouraging words from the robot or getting their ears singed with dismissive remarks.
Although the human players’ rationality improved as the number of games played increased, those who were criticized by the robot didn’t score as well as those who were praised.

It’s well established that an individual’s performance is affected by what other people say, but the study shows that humans also respond to what machines say, said Afsaneh Doryab, who was a systems scientist at CMU’s Human-Computer Interaction Institute (HCII) during the study and is now an assistant professor in Engineering Systems and Environment at the University of Virginia.
This machine’s ability to prompt responses could have implications for automated learning, mental health treatment and even the use of robots as companions, she said.
Future work might focus on nonverbal expression between robots and humans, said Roth, now a Ph.D. student at the University of Maryland. Fang suggests that more needs to be learned about how different types of machines (say, a humanoid robot as compared to a computer box) might evoke different responses in humans.
In addition to Roth, Fang and Doryab, the research team included Manuela Veloso, professor of computer science; Samantha Reig, a Ph.D. student in the HCII; Umang Bhatt, who recently completed a joint bachelor’s-master’s degree program in electrical and computer engineering; Jonathan Shulgach, a master’s student in biomedical engineering; and Tamara Amin, who recently finished her master’s degree in civil and environmental engineering.
Funding: The National Science Foundation provided some support for this work.
Compassion-based interventions (CBIs) have been shown to be effective in increasing empathy and compassion (Brito et al., 2018) and in reducing stress, anxiety, and depression (Kirby et al., 2017).
They have been used in clinical settings, such as oncology (Gonzalez-Hernandez et al., 2018) and personality disorders (Feliu-Soler et al., 2017).
Compassion refers to “the feeling that arises in witnessing another’s suffering and that motivates a subsequent desire to help” (Goetz et al., 2010, p. 351). When this feeling is focused on oneself, it is called self-compassion, defined as individuals’ ability to respond to their own suffering with warmth and the desire to alleviate their own pain (Neff and Dahm, 2015).
CBIs use different techniques and meditations to increase self-compassion and compassion skills, such as focused-attention meditations to calm the mind and, above all, the family of constructive meditations (Dahl et al., 2015).
In this family of meditation practices, the meditator purposefully strengthens his/her natural capacity for loving kindness and compassion by intentionally generating compassionate thoughts, feelings, and motivations toward different objects, including him/herself (Brito et al., 2018).
In order to induce and train these positive mental states, the family of constructive meditations requires the use of mental imagery abilities. Surprisingly, the impact of these imagery skills on CBIs has not been studied, even though one of the major difficulties that participants report during the training is related to these imagery abilities.
According to Pearson et al. (2013), there are four mental imagery skills, related to different processes, that could interact with these types of meditation:
(a) creation,
(b) sustainment,
(c) inspection, and
(d) transformation of mental images.
In creation, the meditators have to select the type of images or elements that will be used in the meditation.
In the second process, the sustainment of the mental image, research shows that the image starts to decay after about 250 ms, roughly the time needed for an eye movement (Kosslyn, 1994).
Thus, participants usually have to deal with the frustration of not being able to sustain the image long enough, which could interact with their positive emotional state.
The third aspect, inspection, refers to the interpretation of an object-based characteristic or spatial property of this generated image.
For example, the definition (blurriness) and vividness of the mental image are also significant factors. Finally, transformation includes the capacity to rotate and restructure the image.
Although the effects of compassion and self-compassion training are well known, the factors that predict why the training works for some people and not for others have been understudied.
In this regard, the absence of adequate training in the ability to create, sustain, inspect, or transform mental images may impede the expected positive effects of compassion training. This lack of ability can lead people to struggle with the steps prior to compassion itself, an experience that can discourage them from continuing the training necessary to develop compassion skills, such as self-care or positive qualities (compassion, equanimity, joy, or loving kindness).
Virtual reality (VR) can be a useful tool to overcome this limitation because it can help people construct, sustain, inspect, and transform mental images. VR can be considered an advanced imagery system, an experiential form of imagery that is as effective as reality in inducing cognitive, emotional, and behavioral responses (Day et al., 2004).
VR has been used to train compassion and self-compassion. For instance, Slater’s group studied how virtual bodies can promote compassion and self-compassion, analyzing the effects of self-identification with virtual bodies in immersive VR on self-compassion in individuals with high self-criticism and depression (Falconer et al., 2014), and showing that the approach can be effective in reducing depression severity and self-criticism.
The same group investigated how embodiment in a Black avatar decreases racial prejudice and changes negative interpersonal attitudes (Peck et al., 2013).
Bailenson’s group also studied how an embodied avatar in VR can make people more altruistic. For example, participants embodied a Superman avatar, and the results showed that they felt more helpful after the experiment (Rosenberg et al., 2013).
All these studies use embodied VR systems, an approach rooted in cognitive science that emphasizes, among other aspects, the subjective experience of using and “having” a body.
This paradigm has been used to generate Full Body Illusions (Ehrsson, 2007) and body swapping experiments, which have become an increasingly popular method for investigating how illusory ownership of an entire fake or virtual body affects various aspects of bodily perception and experience.
Thus, VR allows individuals to be present not only in the environment but also in someone else’s body (e.g., that of another person or an animal), creating a body-swap experience that makes it possible to study embodiment processes, as well as emotional states such as empathy or compassion (Peck et al., 2013; Rosenberg et al., 2013; Falconer et al., 2014).
The Machine to Be Another (TMTBA) is a low-budget body-swapping system designed to address the relationship between identity and empathy through multisensory stimulation (visual, cutaneous, proprioceptive, and auditory) that induces a body-swap illusion (Oliveira et al., 2016).
It allows the user to have the immersive experience of seeing him/herself in the body of another person (a performer) (Bertrand et al., 2018). The TMTBA is connected to a head-mounted display (an Oculus Rift), and the performer’s first-person perspective is captured by a camera controlled by the user’s head movements, showing the torso, legs, and arms of the performer’s body. Through the Oculus Rift, the user sees the image captured by the camera, creating the illusion of being another person and of seeing him/herself from a third-person point of view.
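To make that pipeline concrete, here is a minimal sketch of the head-tracked video loop just described. The source does not describe the system’s software interfaces, so every class and method name below (HeadTracker, PanTiltCamera, HeadMountedDisplay) is a hypothetical stand-in, not the actual TMTBA code.

```python
import time

# Hypothetical stand-ins for the TMTBA hardware; all names below are
# assumptions for illustration, since the real interfaces are unknown.

class HeadTracker:
    """Stub: reports the user's head orientation as (yaw, pitch) degrees."""
    def read(self):
        return 0.0, 0.0  # placeholder reading

class PanTiltCamera:
    """Stub: camera worn by the performer, steered to mirror the user."""
    def point(self, yaw, pitch):
        pass  # would drive pan/tilt servos to match the user's head

    def capture(self):
        return b"frame"  # placeholder video frame

class HeadMountedDisplay:
    """Stub: the user's HMD (an Oculus Rift in the actual system)."""
    def show(self, frame):
        pass  # would render the frame to both eyes

def body_swap_loop(tracker, camera, hmd, seconds=10, fps=60):
    """Core of the illusion: the user's head movements steer the
    performer-side camera, and the user sees that first-person feed."""
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        yaw, pitch = tracker.read()   # where is the user looking?
        camera.point(yaw, pitch)      # mirror it on the performer side
        hmd.show(camera.capture())    # stream the performer's view back
        time.sleep(1.0 / fps)

body_swap_loop(HeadTracker(), PanTiltCamera(), HeadMountedDisplay())
```

The essential design point this loop captures is the tight coupling between the user’s head motion and the performer-side camera; it is that contingency, combined with the synchronized multisensory stimulation, that sustains the body-swap illusion.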
As mentioned above, this system could overcome some limitations of imagery skills, and it can be combined with self-compassion meditations to generate a powerful emotional response of self-compassion and, therefore, increase adherence to the meditation practice and its effects. Thus, the main objectives of this study are to analyze the effects of a self-compassion meditation supported by the TMTBA-VR system, compared to usual practice (audio only), and to analyze whether imagery skills moderate the effect of condition on adherence to meditation practice after 2 weeks.
The main hypotheses fall into two groups. On one hand, effects are expected immediately before and after the meditation supported by TMTBA-VR, showing its effectiveness in (1) increasing positive qualities toward self/others, decreasing negative qualities toward self/others, and increasing awareness of and attention to the present experience immediately after a compassion practice. On the other hand, an effect is expected after 2 weeks of practice, such that participants in the TMTBA-VR condition will (2) show greater adherence to meditation practice, a higher frequency of self-care behaviors, and more positive affect, as well as less negative affect, compared to usual practice. Furthermore, imagery skills are expected to moderate these results.
Source:
Carnegie Mellon University
Media Contacts:
Byron Spice – Carnegie Mellon University
Image Source:
The image is credited to Carnegie Mellon University.
Original Research: The findings were presented at the IEEE International Conference on Robot & Human Interactive Communication (RO-MAN).