Tiny eye movements can be used as an index of our ability to anticipate relevant information in the environment


Tiny eye movements can be used as an index of humans’ ability to anticipate relevant information in the environment independent of the information’s sensory modality, a team of scientists has found.

The work reveals a connection between eye movements and the sense of touch.

“The fact that tiny eye movements can hinder our ability to discriminate tactile stimuli, and that the suppression of those eye movements before an anticipated tactile stimulus can enhance that same ability, may reflect that common brain areas, as well as common neural and cognitive resources, underlie both eye movements and the processing of tactile stimuli,” explains Marisa Carrasco, a professor of psychology and neural science at New York University and the senior author of the paper, which appears in the latest issue of the journal Nature Communications.

“This connection between the eyes and touch reveals a surprising link across perception, cognition, and action,” adds Stephanie Badde, an NYU post-doctoral researcher and first author of the paper.

The study asked human participants to distinguish between two kinds of vibrations ("fast," i.e., high frequency, vs. "slow," i.e., low frequency) produced by a device attached to their finger.

The researchers then tracked the participants' tiniest involuntary eye movements, known as microsaccades. These small, rapid eye movements occur even when we try to fixate our gaze on a single spot.

Here, participants were instructed to keep their gaze on a fixation spot on a computer screen. A cue, a tap delivered by the device at their finger, signaled that a vibration was imminent.

What the participants did not know was that the time interval between the cue and the tactile vibration was a central part of the experimental design.

Manipulating that interval allowed participants in some blocks to predict more precisely when the vibration would occur. Notably, when participants had this precise temporal information, their microsaccade rates decreased just before the vibration, and this suppression of microsaccades enhanced their ability to distinguish between fast and slow vibrations.
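As an illustration only (this is not the authors' code, and all durations are hypothetical placeholders), such a design can be sketched by fixing the cue-to-vibration interval in predictable blocks and drawing it from a range in unpredictable blocks:

```python
import random

# Hypothetical block design: a fixed cue-to-vibration interval makes stimulus
# onset predictable; a jittered interval makes it unpredictable.
FIXED_INTERVAL_S = 1.0         # assumed fixed cue-target interval (seconds)
JITTER_RANGE_S = (0.5, 1.5)    # assumed range for unpredictable blocks

def cue_to_vibration_interval(block_type: str) -> float:
    """Delay between the tactile cue (tap) and the vibration, per block type."""
    if block_type == "predictable":
        return FIXED_INTERVAL_S
    return random.uniform(*JITTER_RANGE_S)

# One 20-trial block per condition:
predictable = [cue_to_vibration_interval("predictable") for _ in range(20)]
unpredictable = [cue_to_vibration_interval("unpredictable") for _ in range(20)]
```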

The paper’s other co-authors were Caroline Myers, an NYU graduate student, and Shlomit Yuval-Greenberg, an associate professor at Tel-Aviv University.


Introduction

Visuomotor adaptation is commonly used to study human motor learning in health (Krakauer et al., 1999; Ghilardi et al., 2000; Taylor and Ivry, 2013; Galea et al., 2015; Haar et al., 2015b) and disease (Rabe et al., 2009; Wong et al., 2019).

In a visuomotor rotation task, the visual representation of hand position is manipulated such that subjects must learn a new mapping of motor commands to apparent outcomes.

Recent studies have dissociated explicit and implicit processes in visuomotor adaptation (Mazzoni and Krakauer, 2006; Hegele and Heuer, 2010; Taylor and Ivry, 2011; Taylor et al., 2014; Werner et al., 2015), where the sum of the two gives the total adaptation.

One way to measure implicit learning is to ask subjects to reach straight for the target without the perturbation (or without any visual feedback) and to measure the difference between the reach direction and the target direction. We call this exclusion because the subject is being asked to "exclude" their explicit knowledge from their behavior.

When measured after adaptation, this is called the aftereffect. During adaptation, it is sometimes called a "catch trial" (Werner et al., 2015). Exclusion cannot be measured on every trial, since it presumes surrounding adaptation trials.
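In equation form (notation ours, with directions denoted by θ):

```latex
\text{implicit} \;=\; \theta^{\,\text{exclusion}}_{\text{reach}} - \theta_{\text{target}},
\qquad
\text{explicit} \;=\; \theta_{\text{adaptation}} - \text{implicit}
```

where θ_adaptation is the total adaptation measured on the regular rotation trials.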

To assess implicit and explicit learning throughout the adaptation process, Taylor et al. (2014) suggested simply asking subjects to report their aiming direction before each movement by indicating which of the numbers displayed in a circle on the screen lay in the direction they intended to move.

Reporting has been a very productive experimental approach. However, the protocol has known limitations [e.g., reporting increases the length and variability of reaction time (RT) since subjects can start moving only after reporting].

One alternative is to measure explicit learning using eye movements: perhaps eye movements can provide an objective measure of subjects’ intentions without needing special trials or direct questioning. During unperturbed reaching movements, the eyes were found to provide an unbiased estimate of hand position (Ariff et al., 2002).

During visuomotor rotation, the correlation between gaze and hand directions increases in early practice and gradually decreases thereafter (Rand and Rentsch, 2016). Indeed, a recent study found that gaze patterns during visuomotor adaptation were linked to explicit learning (de Brouwer et al., 2018).

Interestingly, de Brouwer et al. (2018) noticed subjects whose eye movements did not reflect adaptation even though their aftereffects indicated some explicit learning. This raises the possibility that some forms of explicit adaptation are captured by the eye movements, while others are not.

This possibility is in line with recent suggestions of multiple explicit strategies in human motor learning, even in a simple visuomotor rotation task (McDougle and Taylor, 2019). McDougle and Taylor (2019) showed that subjects in different conditions may use either discrete response caching or parametric mental rotation as two distinct explicit strategies.

Their results further suggest that RT can be used to dissociate these explicit strategies: mental rotation is a time-consuming computation, whereas caching is a fast, automatic process that does not require a long RT (Haith and Krakauer, 2018). Here, we explore the explicit components captured by eye movements and their link to the explicit strategies captured by RT.
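Schematically (notation ours), this RT signature can be written with θ as the required re-aiming angle and k > 0:

```latex
\mathrm{RT}_{\text{mental rotation}}(\theta) \;\approx\; \mathrm{RT}_0 + k\,\lvert\theta\rvert,
\qquad
\mathrm{RT}_{\text{caching}}(\theta) \;\approx\; \mathrm{RT}_0
```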

In the first experiment of the current study, we measured subjects' eye movements during visuomotor rotation with and without verbal reporting. As in de Brouwer et al. (2018), our results demonstrate that, with verbal reporting, eye fixations before movement onset accurately predict the reported aiming direction.

Without reporting, eye fixation before movement onset correlates well with explicit learning measured by aftereffect.

However, it does not account for the full explicit knowledge revealed by exclusion, suggesting that eye movements capture only a component of explicit learning when there is no verbal report.

In a second experiment, we explored the time course of the discrepancy between eye movements and exclusion by introducing exclusion (catch) trials during adaptation, in addition to testing for an aftereffect at the end of adaptation.

For some subjects, measures of explicit learning from eye movements matched those from exclusion. For other subjects, exclusion revealed more explicit knowledge than that found in the eye movements. The first group was divided into the following two subgroups: those using primarily an explicit strategy and those with hardly any contribution from an explicit strategy.

The second group, for whom exclusion revealed more explicit knowledge than did the eye movements, contained subjects with the full range of combinations of explicit and implicit learning. Further analysis of RT suggests that the explicit knowledge reflected in the eye movements may be the same mental rotation component identified by McDougle and Taylor (2019).

Discussion
In this study, we explored the extent to which explicit components of visuomotor adaptation are reflected in eye movements. We did this by comparing eye movements to two accepted measures of explicit learning: verbal report and the exclusion test.

Our experiments showed that eye movements have a stable pattern: after target appearance, the eyes saccade from the origin to the target, and then, before movement onset, the eyes saccade again in the direction toward which the subject will aim. We believe that these eye movements provide a measure of explicit adaptation (we called it explicit eye); however, this measure only reflects part of the explicit adaptation.

Our first experiment showed that when subjects reported their intended direction, explicit eye and the other two measures (verbal report and exclusion) all matched. In contrast, when subjects did not report, explicit eye reflected only part of the explicit adaptation revealed by exclusion. However, the two were correlated, suggesting that explicit eye reflects select components of the explicit knowledge measured by exclusion.

In our second experiment, we explored more fully the time course of the separation between explicit eye and the explicit learning shown by exclusion. We found that the two diverge early in adaptation. In analyzing the data of the second experiment, we found three groups of subjects.

The first group adapted fully to the rotation and had eye movements consistent with performance in exclusion trials (Match-High group); the second group also adapted fully but had less explicit eye than would be expected from exclusion trials (No-Match group); the third group only adapted partially and had eye movements consistent with lack of explicit adaptation in the exclusion trials (Match-Low group).

The learning curves of this last group were similar to those reported in paradigms where subjects had only implicit adaptation (Morehead et al., 2017; Kim et al., 2018, 2019).

Smith et al. (2006) proposed a two-state model of motor adaptation that is still the most widely used model in the field. It has been nicely mapped onto explicit (fast) and implicit (slow) components of adaptation (McDougle et al., 2015).
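In its standard form, the model writes the net adaptation x(n) on trial n as the sum of a fast state and a slow state, each updated by the trial error e(n):

```latex
\begin{aligned}
x^{(f)}(n+1) &= A_f\,x^{(f)}(n) + B_f\,e(n) && \text{(fast: learns quickly, forgets quickly)}\\
x^{(s)}(n+1) &= A_s\,x^{(s)}(n) + B_s\,e(n) && \text{(slow: learns slowly, retains well)}\\
x(n) &= x^{(f)}(n) + x^{(s)}(n), && A_f < A_s,\quad B_f > B_s
\end{aligned}
```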

However, there have been suggestions that there are more than two states in the adaptation process, and that there may be multiple explicit and implicit components, potentially with different time constants.

Forano and Franklin (2019) showed that dual adaptation can be best explained by models with two fast components, and McDougle and Taylor (2019) showed two explicit strategies in visuomotor rotation: caching and mental rotation. Presumably, in our study, the group for which explicit eye explained only part of the explicit learning (No-Match group) used multiple explicit strategies, while the group where measures of explicit learning matched used one (Match-High group) or, perhaps, none (Match-Low group).

The question arises whether the components of the explicit adaptation that are reflected in explicit eye map to the explicit strategies identified by McDougle and Taylor (2019). In that study, the key difference in the strategies was that one strategy introduced a correlation between rotation and reaction time while the other did not.

Consequently, we examined reaction times in the different groups. We found that the group with the single explicit strategy (captured by gaze; Match-High group) had very long reaction times relative to the other groups. Interestingly, these subjects had longer reaction times in the baseline phase as well, suggesting that they were careful, explicitly controlled movers even during normal movements.

The reaction times of the No-Match group were much lower. That is, the No-Match group achieved explicit adaptation comparable to that of the Match-High group, but their explicit adaptation required less preparation time. The Match-Low subjects, who only adapted implicitly, had the fastest reaction times.

Taken together, these findings lead us to hypothesize that the explicit components reflected in explicit eye are the same components that drive longer reaction times. McDougle and Taylor (2019) identified this as the process of mental rotation and contrasted it with the low-reaction-time mechanism of caching.

Links between the intended direction of movement and eye movements have been foreshadowed (Rentsch and Rand, 2014; Rand and Rentsch, 2016) and demonstrated explicitly (de Brouwer et al., 2018). Our study supports these earlier findings, although there are some technical issues that deserve consideration.

First, we followed the Rand and Rentsch (2016) study in using only end-point feedback rather than continuous presentation of the cursor. This simplified the eye movements and allowed us to determine that the fixations immediately before movement initiation provided the most reliable estimate of explicit adaptation.

The specific timing at which eye movements are sampled has consequences. Our findings match those of de Brouwer et al. (2018) in that both studies find that eye movements reflect explicit adaptation.

An important difference lies in which fixation is analyzed. de Brouwer et al. (2018) evaluated the fixation closest to the rotation angle. Basic statistical logic suggests that such a measure will tend to be biased: selecting, on every trial, the fixation nearest an expected value inflates the apparent agreement with that value, and our data bear this out. The last fixation before movement onset proved a more stable measure and is consistent with earlier results on the specific timing with which eye movements predict hand movements (Ariff et al., 2002).

This measure also allowed an unbiased quantification of explicit adaptation even in subjects with very little of it, which was key for identifying the Match-Low group. This difference in the measures may be one reason why de Brouwer et al. (2018) did not identify the three different groups of subjects we found.
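A minimal sketch of this measure, assuming a simple (onset time, x, y) fixation format and angles in degrees; this is our illustration, not the study's analysis code:

```python
import math

def fixation_direction(x: float, y: float, origin=(0.0, 0.0)) -> float:
    """Direction of a fixation relative to the movement origin, in degrees."""
    return math.degrees(math.atan2(y - origin[1], x - origin[0]))

def explicit_eye(fixations, movement_onset: float, target_dir: float):
    """Explicit adaptation implied by gaze: angle between the last fixation
    that started before movement onset and the target direction.

    fixations: list of (onset_time, x, y) tuples; returns None if no
    fixation precedes movement onset."""
    pre_movement = [f for f in fixations if f[0] < movement_onset]
    if not pre_movement:
        return None
    _, x, y = max(pre_movement, key=lambda f: f[0])  # latest pre-movement fixation
    return fixation_direction(x, y) - target_dir
```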

Last, we note that each of our measures (report, eye, and exclusion) measured either explicit adaptation or implicit adaptation, but not both. We then calculated the complementary adaptation by subtracting the measured component from the hand direction.

Much influential research in the field takes this approach: it assumes that hand direction is the simple sum of an explicit and an implicit component (Taylor et al., 2014; Huberdeau et al., 2015; McDougle et al., 2015; Christou et al., 2016; Leow et al., 2017).
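Written out (notation ours), the assumption and the subtraction it licenses are, for example with the explicit component measured from gaze:

```latex
\theta_{\text{hand}} \;=\; \theta_{\text{explicit}} + \theta_{\text{implicit}}
\quad\Longrightarrow\quad
\theta_{\text{implicit}} \;=\; \theta_{\text{hand}} - \theta_{\text{explicit eye}}
```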

However, this assumption has been questioned, and various efforts to validate it have been put forward, including the use of inclusion trials in combination with exclusion trials (Werner et al., 2015; Neville and Cressman, 2018; Modchalingam et al., 2019). Exclusion trials test for explicit knowledge by asking subjects to stop using what they know.

Inclusion trials verify this ability to explicitly control behavior by asking subjects to go back to using what they know. Since, in most studies, inclusion trials show less adaptation than do the rotation trials that preceded them, it seems that the total behavior must involve some component that is more easily turned off than turned back on.

This is in line with the claim in this article that explicit knowledge may involve multiple components. We use inclusion trials to further explore this idea in the study by Maresch and Donchin (2019).

This study replicates and extends earlier findings that eye movements reflect an explicit strategy in visuomotor adaptation. It supports other reports demonstrating multiple explicit components in adaptation; it seems that some components of explicit adaptation are not reflected in the eye movements.

The components reflected in the eye movements are correlated with reaction time and may include the component identified by McDougle and Taylor (2019) as mental rotation. While eye movements may not be a perfect measure of explicit adaptation, they could be used to capture this component on a trial-by-trial basis without influencing the adaptation.


Source:
NYU
