Kyoko Hine, Assistant Professor in the Department of Computer Science and Engineering at Toyohashi University of Technology, together with a research team at Tokyo Denki University, has found that virtual reality (VR) may interfere with visual memory.
In recent years, expectations have grown that VR will be used effectively not only in multimedia and entertainment but also in educational settings. However, in order to benefit society, information technology needs to take human characteristics into consideration. The nature of VR can only become known through scientific verification based on experiments such as the one undertaken in this research.
In recent years, head-mounted displays (HMDs) have become widespread, and experiencing VR has become commonplace.
VR updates the displayed images to match the user's movements, creating a strong sense of realism and enhanced immersion. For this reason, hopes have been raised that VR can be used as a new tool for efficient learning, because it attracts children's attention even in educational settings. However, there has been no scientific verification of the effects of VR on visual memory.
Therefore, the research team conducted an experiment using HMDs and examined the effects of VR on memory.
In the experiment, the participants visited a museum virtually and looked at paintings. After that, a memory test was conducted about the paintings.
With regard to the VR experience, the research team set up conditions such that one group viewed images linked to their movements on an HMD (active VR) and a second group watched another person’s VR video on a display (passive VR).
In other words, under active VR, the participants could look around at their surroundings themselves, whereas under passive VR they could not. When the memory test results of the two groups were compared, the active VR group performed worse.
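A between-group comparison like this is typically evaluated with an independent-samples test. As a minimal sketch only, with entirely invented recall scores (the published data are not reproduced here), a Welch's t-statistic for the two groups could be computed as follows:

```python
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

# Hypothetical recall scores (paintings correctly remembered per participant);
# these numbers are illustrative, not the study's data.
passive_vr = [8, 7, 9, 6, 8, 7, 9, 8]
active_vr = [6, 5, 7, 5, 6, 4, 6, 5]

t = welch_t(passive_vr, active_vr)
print(round(t, 2))  # a positive t indicates the passive group scored higher
```

A positive statistic here simply reflects the direction of the group difference; in practice one would also compute degrees of freedom and a p-value before drawing conclusions.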
From this, it became clear for the first time that VR may interfere with visual memory because of the way it moves images in conjunction with the user's movements. The reason may be that the enhanced sense of realism and immersion created by the ability to look around freely, which is characteristic of VR, tires the brain and consequently prevents the formation of visual memories. While there are high hopes for VR technology as an educational tool that engages users, and children in particular, it is important to create teaching materials that take these characteristics of VR into account.
Society will require the development of IT that considers human characteristics more than ever in the coming years.
VR scene. Image is credited to Toyohashi University of Technology.
The research team reproduced and filmed the museum in VR. Securing a quiet and appropriate space for viewing the paintings was a challenge in itself. In addition, the team shot 10 minutes of video, which had to be re-shot many times in order to secure a viewing angle that allowed the paintings to be viewed and remembered for the whole 10 minutes.
As a result of these efforts, the team was able to produce good quality VR images for the experiment.
The team wants to find out why visual memory is hindered when the participants can look around freely in VR.
Moving forward, the team hopes to offer suggestions on how to use VR as a better learning tool by removing the causes of this phenomenon.
A defining feature of episodic memory (EM) is the capacity to provide information about the content of our conscious personal experiences of “when” and “where” events occurred as well as “what” happened [1,2].
Previous studies defined EM as the recall of contextually rich and personally relevant past events that are associated with specific sensory-perceptual and cognitive-emotional details [3–10]. EM has been distinguished from semantic memory, the latter being associated with general self-knowledge and the recall of personal facts that are independent of re-experiencing specific past events [11–17].
In a series of seminal papers, Endel Tulving highlighted the subjective dimension of EM associated with the re-experiencing of specific past events by pointing out the importance of the sense of self and introducing his influential notion of autonoetic consciousness.
Tulving distinguished autonoetic consciousness from noetic consciousness, linking the latter to semantic memory and to knowing about (rather than re-experiencing) specific past events.
Although several other cognitive domains have been proposed to contribute to the sense of self (e.g. language, mental imagery, facial self-recognition [32–35]), recent research has highlighted the importance of non-cognitive multisensory and sensorimotor contributions to the sense of self.
This novel theoretical and experimental approach is based on behavioral [36,37], neuroimaging [38–40] and clinical data [39,41] and links the processing and integration of different bodily stimuli to the sense of self, termed bodily self-consciousness (BSC) (for review see [42,43]).
This work was based on clinical observations in neurological patients with so-called out-of-body experiences, which are characterized by changes in the sense of self, in particular a shift of the experienced self-location and perspective from an embodied first-person perspective to a third-person perspective [39,41]. Milder but comparable states have since been induced in healthy participants by using virtual reality (VR) technology to provide multisensory stimulation [36,39,47].
Given the link between BSC and subjective experience, and previous claims that subjective re-experiencing of specific past events is a fundamental component of EM [2,18], we argue that multisensory bodily processing may be of relevance not only for BSC but also for consciousness concerning past events.
Recent findings have shown that BSC impacts several perceptual and cognitive functions such as tactile perception [48,49], pain perception [50,51], and visual perception [52–54], as well as egocentric cognitive processes.
Concerning EM, St. Jacques et al. used a novel camera technology to examine differences in self-projection (i.e. the capacity to re-experience the personal past and to mentally infer another person's perspective) and found that the ventral–dorsal subregions of the anterior midline are functionally dissociable and may differentially contribute to self-projection when comparing self versus other.
Bergouignan et al. reported that recall of items and hippocampal activity during the encoding of episodic events is modulated by the visual perspective from which the event was viewed during encoding, and St. Jacques et al. showed that first- versus third-person perspective during retrieval modulated the recall of autobiographical events, associating this with medial and lateral parietal activations.
Together, these findings revealed that retrieval-induced forgetting is enhanced by a third-person, but not a first-person, perspective. These studies therefore suggest that encoding of EM requires the natural co-perception of one's body and the extrapersonal world, which is perceived from the first-person perspective. As such, we here predicted that bodily multisensory processing, which has been described to modulate BSC, would interfere with EM processes.
Traditionally, behavioral and neuroimaging EM studies have relied on questionnaires, verbal reports, interviews, or mental imagery and have predominantly investigated memory retrieval using a variety of stimuli and procedures, such as cue words and pictures [58–63].
For example, important research has relied on interviews with participants [61,64] and on personalized lists of significant life events [9,30,65–67], and has employed different procedures asking participants to re-experience particular life episodes [59,62,63,68,69]. This differs from research investigating verbal memory through encoding and recall of word lists [70–73] or testing spatial memory with figures, spatial paths, or other visuospatial materials [74–76] (for which it is much easier to fully control encoding and retrieval).
Beyond the use of controlled images, short video clips or words in EM studies [4,77], an important line of neuroscientific EM work has used novel approaches employing stimuli from real world encounters, outside the laboratory.
For example, Cabeza et al. created a campus tour paradigm and tested EM retrieval using digital photos taken from the tour. Similarly, Schacter et al. introduced a museum tour paradigm, which was used to study reactivation-induced updating of memory for events experienced during the tour.
During encoding, participants went on an audio-guided museum tour while wearing a camera that automatically took photos, some of which were selected to test EM. Vogel and Schwabe also used pictures for testing EM, taken automatically and continuously by a camera during a 2-hour walk through a zoo, comparing events represented by pictures from participants' own zoo tour with those of others.
Several EM research groups have relied on advances in video technology and VR during encoding and retrieval of information (i.e. spatial navigation [80,81]; social interactions [82,83]). Participants were seated in front of a computer screen showing a virtual environment and asked to navigate in such environments using a joystick (encoding) and later asked to recall selected items from the environment (retrieval).
These computer-based VR studies suggest that interactions with the environment during both encoding and retrieval influence memory performance. Compared to passive participation, several VR studies showed better learning performance across free recall trials and recognition tasks [80,84–86].
Plancher et al. suggested that interactions with the naturalistic environment created with VR enhance spatial memory. However, despite these important achievements, most of these studies used non-immersive VR systems, did not employ realistic, life-like virtual scenes, and did not use VR technology that allows the participants' body (and hence multisensory bodily stimulation) to be integrated into the tested virtual life episodes.
In the present experiments, we took advantage of a recently developed immersive VR system, which allows us to preserve the perceptual richness of life episodes, to fully control the experimental stimuli during encoding and retrieval, and to integrate and manipulate multisensory information about our participants' bodies in an online fashion. Unlike traditional laboratory-based studies, we claim that the presence of one's own physical body, in particular, plays a crucial role in our experimental testing of EMs.
Our paradigm approaches 3D real life episodes, but in a VR setting for which all items of the scene during encoding and retrieval are fully controlled. This VR technology allows us to examine the relation between “the bodily-self” and “the episodic-self”, particularly the subjective experience of mentally travelling back in time.
The present experiments had one major technological and one major scientific goal:
(1) develop and test real life-like memory in the laboratory with virtual episodes using immersive VR and
(2) investigate whether multisensory bodily stimulations that have been shown to impact BSC, perception, and egocentric cognition modulate EM.
In the first experiment, we tested our immersive VR system and sought to address some of the experimental limitations of earlier EM studies, which either had limited control of actual autobiographical stimuli and events during encoding and only examined the stage of EM retrieval [5,60,67,88] or controlled EM encoding, but without the immersion into the original scenes during EM retrieval [9,57,65].
The main aim of our first experiment was to validate our novel VR paradigm in order to study EM in a more naturalistic setting. We further tested EM performance and confidence for immersive three-dimensional (3D) VR scenes at two different time points and for different numbers of changed items (which varied between the two sessions); we predicted that memory would decrease depending on the delay and on the number of items changed.
Numerous behavioral cognitive studies have observed dissociations between memory accuracy and memory confidence [89–95]. For example, Talarico & Rubin showed that objective accuracy for the events of September 11th, 2001 did not differ from accuracy for everyday events; however, the subjective feeling of remembering was enhanced for the highly arousing EMs compared to everyday EMs. Likewise, Sharot & Yonelinas found that emotional photographs were remembered with a greater subjective sense of recollection, yet objective memory performance for emotional and neutral photos did not differ.
Similar to the prior investigations examining the effect of emotional memories on subjective confidence, we thus sought to investigate the impact of multisensory bodily cues on subjective confidence.
In the second experiment, we investigated the main scientific hypothesis of the present experiments and tested the potential link between multisensory own-body signals, which are fundamental for BSC, and EM.
Vision and proprioception are sensory signals that are highly relevant for the brain in order to rapidly and continuously update the instantaneous representation of the body in space. Perceiving one's body as part of a visual scene (for example, a hand lying on a table) relies on (i) visual, (ii) proprioceptive, and (iii) tactile cues.
These signals are processed initially in different brain regions and subsequently integrated in multisensory brain regions [42,43]. Such multisensory body-related signals are not just relevant for hand perception, but also for BSC, including hand ownership (i.e. the feeling that this hand is mine), self-identification with the body, self-location (i.e. experiencing the self as being located in space), and the first-person perspective (i.e. experiencing the world from a spatial origin with a direction) [42,43,96].
We thus examined whether the presence of online and congruent multisensory cues from the participant’s body (i.e. the presence of one’s own physical body from the first-person viewpoint) impacts memory performance and confidence in the present VR paradigm, compared to an experimental condition where such online first-person bodily cues are absent.
Based on BSC work showing that viewing the body enhances performance in perceptual and cognitive tasks [57,58], and based on the fact that during memory encoding the body is in most instances co-perceived with the other elements of the scene, we predicted that the presence of a body during encoding would enhance memory performance.
Finally, we performed a third (control) experiment in order to test whether the effect of multisensory bodily stimulation that we observed in the second experiment is specific to multisensory bodily cues.
Original Research: Open access
“Active View and Passive View in Virtual Reality Have Different Impacts on Memory and Impression”. Hine K. & Tasaki H.
Frontiers in Psychology doi:10.3389/fpsyg.2019.02416.