Embodied cognition proposes that people understand words for objects through how they physically interact with those objects, so the researchers devised a test to observe how word meanings are processed when participants’ ability to interact with the corresponding objects is limited.
Words are defined in relation to other words; a “cup,” for example, can be described as a “container, made of glass, used for drinking.” However, you can only use a cup if you understand that to drink from a cup of water, you hold it in your hand and bring it to your mouth, or that if you drop the cup, it will smash on the floor.
Without understanding this, it would be difficult to create a robot that can handle a real cup. In artificial intelligence research, these issues are known as the symbol grounding problem: how to map symbols onto the real world.
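As a toy illustration of the symbol grounding idea (not code from the study; the data structures and names below are hypothetical), compare a purely word-to-word definition of “cup” with a representation that also links the word to perceptual features and actions a robot could carry out:

```python
# Toy illustration of the symbol grounding problem (hypothetical example).
# A purely symbolic definition relates "cup" only to other symbols, while a
# grounded representation also links the word to perceptual features and
# motor routines (affordances) that a robot or body could act on.

# Symbol-to-symbol definition: meaning stays circular unless some symbol
# is eventually tied to perception and action.
symbolic_lexicon = {
    "cup": ["container", "glass", "drinking"],
    "container": ["object", "holds", "liquid"],
}

# Grounded entry: the same word linked to percepts, affordances, and
# consequences of actions, rather than only to other words.
grounded_lexicon = {
    "cup": {
        "percepts": {"graspable": True, "approx_width_cm": 8},
        "affordances": ["grasp", "lift", "bring_to_mouth", "pour"],
        "consequences": {"dropped": "may smash on the floor"},
    }
}

def can_drink_from(word: str) -> bool:
    """A robot can plan 'drinking' only if the word is grounded in actions."""
    entry = grounded_lexicon.get(word, {})
    needed = {"grasp", "lift", "bring_to_mouth"}
    return needed.issubset(set(entry.get("affordances", [])))

print(can_drink_from("cup"))  # True: the grounded entry supports the action plan
```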
To test embodied cognition, the researchers conducted experiments to see how participants’ brains responded to words describing objects that can be manipulated by hand, comparing conditions in which the participants’ hands could move freely with conditions in which they were restrained.
“It was very difficult to establish a method for measuring and analyzing brain activity. The first author, Ms. Sae Onishi, worked persistently to come up with a task, in a way that we were able to measure brain activity with sufficient accuracy,” Professor Makioka explained.
In the experiment, two words such as “cup” and “broom” were presented to participants on a screen. They were asked to compare the relative sizes of the objects those words represented and to verbally answer which object was larger—in this case, “broom.”

Comparisons were made between words describing two types of objects: hand-manipulable objects, such as “cup” or “broom,” and nonmanipulable objects, such as “building” or “lamppost,” to observe how each type was processed.
During the tests, the participants placed their hands on a desk, where they were either free to move or restrained by a transparent acrylic plate. When the two words were presented on the screen, answering which one represented the larger object required the participants to think of both objects and compare their sizes, forcing them to process each word’s meaning.
Brain activity was measured with functional near-infrared spectroscopy (fNIRS), which has the advantage of taking measurements without imposing further physical constraints.
The measurements focused on the intraparietal sulcus and the inferior parietal lobule (supramarginal gyrus and angular gyrus) of the left hemisphere, which are responsible for semantic processing related to tools.
The speed of the verbal response was measured to determine how quickly the participant answered after the words appeared on the screen.
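The article does not describe the study’s statistical pipeline; as a minimal sketch of how such response-latency data could be compared, assuming one mean latency per participant per condition (the numbers and variable names below are hypothetical), one might run a paired t-test:

```python
# Minimal sketch (not the study's actual analysis): compare hypothetical
# per-participant mean verbal response latencies for hand-manipulable words
# under hand-free vs. hand-restrained conditions with a paired t-test.
import numpy as np
from scipy import stats

# Hypothetical mean latencies in milliseconds, one value per participant.
latency_hands_free = np.array([812, 790, 845, 760, 801, 833, 778, 820])
latency_restrained = np.array([839, 815, 861, 772, 826, 858, 790, 842])

t_stat, p_value = stats.ttest_rel(latency_restrained, latency_hands_free)
mean_diff = (latency_restrained - latency_hands_free).mean()

print(f"mean slowdown under restraint: {mean_diff:.1f} ms")
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```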
The results showed that left-hemisphere activity in response to hand-manipulable objects was significantly reduced by the hand restraints. Verbal responses were also affected by the constraints.
These results indicate that constraining hand movement affects the processing of object meaning, which supports the idea of embodied cognition. They also suggest that embodied cognition could help artificial intelligence learn the meanings of objects.
- Embodied Cognition Framework
Embodied cognition defines cognition as an interaction between the mind and the body’s systems. People generate mental representations through physical simulations, situated action, and bodily states (Barsalou, 2008, 2010). Grounded cognition and learning can occur at various levels of mental processing, taking into account abstract internal representations (Barsalou, 2008, 2010; Wilson, 2002).
Simulation refers to the process in which the brain captures information across the body’s modalities (e.g., sight, sound) and integrates all the representations to be stored in memory. When a person thinks about an experience or an idea, the brain reenacts the perceptual, motor, and introspective states that were stored during the time the body and the mind interacted with the physical world (Barsalou, 2008, 2010).
For example, an experience happens, such as petting a cat, and the brain captures it in a multimodal representation: how the cat looks and feels, the action of petting, and introspection of enjoyment or comfort. When information is remembered (i.e., petting a cat), the body simulates those same systems in the brain as if the body were enacting that experience.
Situated action, how the body interacts with the environment in specific ways, also shapes thinking. For example, how human bodies are situated in the environment (e.g., verticality) may influence the type of metaphors people create (e.g., happiness as “feeling up” and sadness as “feeling down”; Anderson, 2003; Lakoff, 1993; Lakoff & Johnson, 1980). In addition, body position can contribute to thinking, suggesting that humans use their bodily states to interpret experiences. For instance, unconsciously smiling or frowning can influence how humorous a cartoon seems (Strack, Martin, & Stepper, 1988), and holding a slumped posture can elicit feelings of helplessness (Riskind & Gotay, 1982).
- Coordination in the Body Impacts Perception
Extending the EC framework, the body-specificity hypothesis holds that each person’s body interacts with the world in a specific, unique way. If mental representations are generated through the body, “people with different bodily characteristics, who interact with the physical environment in systematically different ways, should form correspondingly different mental representations” (Casasanto, 2009, p. 351).
More specifically, handedness can influence whether people feel that things they encounter on the left or right side of space are “good” or “bad.” When given the choice between two similar items, such as job applicants, consumer products, or cartoon characters, right-handers tend to prefer the one on the right, and left-handers prefer the one on the left (Casasanto, 2009, 2011). Rather than this preference being hardwired into the brain by the same mechanisms that control side dominance, Casasanto argues that it is driven by a person’s sense of fluency with one side of the physical body.
Casasanto and Chrysikou (2011) demonstrated in two studies that this valence preference for one side in space is malleable and that it follows the direction of how fluently the body moves. The first study involved stroke patients who were initially right-handed but lost the ability to control the right side of their body after experiencing the stroke (making their left side more fluent).
Results showed that this group of initially right-handed individuals attributed items on the left as positive and on the right as negative, just like naturally left-handed people. The second study examined healthy right-handed volunteers. The study manipulated the coordination of their hands during a dexterity task by placing a bulky ski glove on either their left or right hand. Participants who wore the ski glove on their right hand (ostensibly making them left-handed) viewed the left side as good and the right side as bad at a significantly higher rate than participants who wore the ski glove on their left hand during the task (which preserved their right-hand fluency). These results demonstrate that temporarily changing a person’s body fluency changes the way visual space (left or right) is designated as positive or negative. More broadly, these findings suggest that changing the way in which people physically interact with their environment may affect their cognition.
Reference: Presence, Vol. 25, No. 3, Summer 2016, 222–233. doi:10.1162/PRES_a_00263
Original Research: Open access.
“Hand constraint reduces brain activity and affects the speed of verbal responses on semantic tasks” by Sae Onishi et al. Scientific Reports