A team of researchers led by Case Western Reserve University scientists and technicians has used the Microsoft HoloLens mixed-reality platform to create what is believed to be the first interactive holographic mapping system of the axonal pathways in the human brain.
The project, described by researchers as a “blending of advanced visualization hardware, software development and neuroanatomy data,” is expected to have a wide range of scientific, clinical and educational applications and to foster collaboration between neuroanatomists and brain-imaging scientists.
For starters, it almost instantly becomes “the foundation for a new holographic neurosurgical navigation system” for Deep Brain Stimulation (DBS) and is being dubbed “HoloDBS” by the team, said lead researcher Cameron McIntyre, the Tilles-Weidenthal Professor of Biomedical Engineering at the Case Western Reserve University School of Medicine.
“More than 100 clinicians have had a chance to beta test this so far and the excitement around the technology has been exceptional,” McIntyre said, adding that the method is already dramatically advancing scientists’ understanding of the complexities associated with certain, targeted brain surgeries.
The new research incorporates decades of valuable but disconnected neural data from dozens of sources and transforms them into a fully three-dimensional, interactive visualization. Users of the technology, including neural engineers, neuroanatomists, neurologists, and neurosurgeons, can view both the animated “atlas” of the brain and its axonal connections in front of them through the HoloLens headset.
“The cool thing about this is that we have been able to integrate decades of neuro-anatomical knowledge into the context of the most modern brain visualization techniques,” McIntyre said.
“We’re taking all of that anatomical knowledge and putting it into the hands of users in an entirely new and useful format.”
McIntyre worked alongside radiology professor Mark Griswold, who is faculty leader of Microsoft HoloLens education-related initiatives and directs the Interactive Commons, a university-wide entity that aims to help faculty, staff and students use a range of visualization technologies to enhance teaching and research. Griswold also led the team that developed the HoloAnatomy app.
Others on the project included Mikkel Petersen, a postdoctoral fellow in the McIntyre Lab, and world-renowned neuroanatomists from the University of Rochester, Université Laval in Quebec City, Quebec, Emory University and the University of Pittsburgh.

Neuron paper details project
The process is described in research being published online in the journal Neuron.
The project focuses on visualizing the precise axon pathways in the brain. Axon pathfinding, a subfield of neural development, studies how neurons send out axons to reach the correct targets in the brain.
Researchers focused on the subthalamic region of the brain, a common surgical target for deep brain stimulation but an area that has been highly problematic for the current best technology, known as tractographic reconstruction.
Tractography, known for revealing colorful “brainbows” inside the human brain, has been used in hospitals for about 20 years. It visually represents the nerve tracts in the brain by using data collected by diffusion MRI, presenting the information in two- and three-dimensional images called tractograms.
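To make that pipeline concrete, here is a minimal sketch of a standard diffusion-MRI tractography workflow using the open-source DIPY library. It illustrates the general technique, not the software used by the Case Western Reserve team; the file names, mask threshold, and tracking parameters are placeholders.

```python
# Illustrative diffusion-MRI tractography with DIPY; paths and
# parameters are placeholders, not the CWRU team's actual setup.
import nibabel as nib
from dipy.core.gradients import gradient_table
from dipy.data import default_sphere
from dipy.direction import peaks_from_model
from dipy.io.gradients import read_bvals_bvecs
from dipy.reconst.dti import TensorModel
from dipy.tracking import utils
from dipy.tracking.local_tracking import LocalTracking
from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion
from dipy.tracking.streamline import Streamlines

# Load a diffusion-weighted volume and its gradient table.
img = nib.load("dwi.nii.gz")
data, affine = img.get_fdata(), img.affine
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")
gtab = gradient_table(bvals, bvecs)

mask = data[..., 0] > 50  # crude brain mask from the b=0 image

# Fit a diffusion tensor per voxel; its principal direction estimates
# local fiber orientation, and fractional anisotropy (FA) its coherence.
model = TensorModel(gtab)
fit = model.fit(data, mask=mask)
peaks = peaks_from_model(model, data, default_sphere,
                         relative_peak_threshold=0.8,
                         min_separation_angle=45, mask=mask)

# Integrate streamlines, stopping where FA drops below white-matter levels.
stopping = ThresholdStoppingCriterion(fit.fa, 0.2)
seeds = utils.seeds_from_mask(mask, affine, density=1)
streamlines = Streamlines(
    LocalTracking(peaks, stopping, seeds, affine, step_size=0.5))
```

The resulting streamlines are the “tractogram”: a set of 3D curves that can be rendered in two or three dimensions, or, as in this project, in a holographic display.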
The Case Western Reserve team advanced that technology by making it not only truly three-dimensional but also interactive, asking a group of expert neuroanatomists to “interactively define axonal trajectories of the cortical, basal ganglia, and cerebellar systems” while wearing the HoloLens headset.
“In doing so, we have produced what is the first anatomically realistic model of the major axonal pathways in the human subthalamic region,” McIntyre said. “This is just the first step and can be repeated throughout the brain.”
More information: Mikkel V. Petersen et al. Holographic Reconstruction of Axonal Pathways in the Human Brain, Neuron (2019). DOI: 10.1016/j.neuron.2019.09.030
A major challenge to reaching a global understanding of the functional organization of the human brain is that each neuroimaging experiment only probes a small number of cognitive processes. Cognitive neuroscience is faced with a profusion of findings relating specific psychological functions to brain activity.
These are like a collection of anecdotes that the field must assemble into a comprehensive description of the neural basis of mental functions, akin to “playing twenty questions with nature” [1]. However, maps from individual studies are not easily assembled into a functional atlas.
On the one hand, the brain recruits similar neural territories to solve very different cognitive problems. For instance, the intra-parietal sulcus is often studied in the context of spatial attention; however, it is also activated in response to mathematical processing [2], cognitive control [3], and social cognition and language processing [4].
On the other hand, aggregating brain responses across studies to refine descriptions of the function of brain regions faces two challenges: First, experiments are often quite disparate and each one is crafted to single out a specific psychological mechanism, often suppressing other mechanisms. Second, standard brain-mapping analyses enable conclusions on responses to tasks or stimuli, and not on the function of given brain regions.
Cognitive subtraction, via the opposition of carefully-crafted stimuli or tasks, is used to isolate differential responses to a cognitive effect.
However, scaling this approach to many studies and cognitive effects leads to neural activity maps with little functional specificity, which are hard to assemble into an atlas of cognitive function. Indeed, any particular task recruits many mental processes; while it may sometimes be possible to cancel out all but one process across tasks (e.g. through the use of conjunction analysis [5]), it is not feasible to do this on a large scale.
Furthermore, it can be difficult to eliminate all possible confounds between tasks and mental processes. An additional challenge to the selectivity of this approach is that, with sufficient statistical power, nearly all regions in the brain will respond in a statistically significant way to an experimental manipulation [6].
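As a concrete illustration of the subtraction logic described above, a minimal single-subject analysis with the nilearn library might look as follows; the events table, condition names (“faces”, “scrambled”), and file name are invented for illustration.

```python
# Minimal cognitive-subtraction sketch with nilearn; the events and
# condition names are hypothetical placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

events = pd.DataFrame({
    "onset":      [0.0, 10.0, 20.0, 30.0],
    "duration":   [5.0, 5.0, 5.0, 5.0],
    "trial_type": ["faces", "scrambled", "faces", "scrambled"],
})

# Fit a voxel-wise GLM to one subject's 4D fMRI run (repetition time 2 s).
model = FirstLevelModel(t_r=2.0).fit("sub-01_task-faces_bold.nii.gz",
                                     events=events)

# The subtraction itself: response to faces minus scrambled images,
# intended to isolate face-specific processing.
z_map = model.compute_contrast("faces - scrambled")
```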
The standard approach to the analysis of functional brain images maps the response of brain regions to a known psychological manipulation [7].
However, this is most often not the question that we actually wish to answer. Rather, we want to understand the mapping between brain regions/networks and psychological functions (i.e. “what function does the fronto-parietal network implement?”).
If we understood these mappings, then in theory we could predict the mental state of an individual based solely on patterns of activation; this is often referred to as reverse inference [8], because it reverses the usual pattern of inference from mental state to brain activation.
Whereas informal reverse inference (e.g. based on a selective review of the literature) can be highly biased, it is increasingly common to use meta-analytic tools such as Neurosynth [9] to perform formal reverse inference analyses (also known as decoding). However, these inferences remain challenging to interpret due to the trade-off between breadth and specificity that is necessary to create a sufficiently large database (e.g. see discussion in [10, 11]).
The optimal basis for brain decoding would be a large database of task fMRI datasets spanning a broad range of mental functions. Previous work has demonstrated that it is possible to decode the task being performed by an individual, in a way that generalizes across individuals [12], but this does not provide insight into the specific cognitive functions being engaged, which is necessary if we wish to infer mental functions associated with novel tasks.
The goal of decoding cognitive functions rather than tasks requires that the data are annotated using an ontology of cognitive functions [13–15], which can then become the target for decoding. Some recent work has used a similar approach in restricted domains, such as pain [16], and was able to isolate brain networks selective to physical pain. Extending this success to the entire scope of cognition requires modeling a broad range of experiments with sufficient annotations to serve as the basis for decoding.
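Schematically, decoding with ontology terms as targets is a multi-label classification problem: each activation map is annotated with the set of terms describing its task, and one classifier per term learns to detect whether that term is engaged. The sketch below uses random placeholder arrays and scikit-learn; it illustrates the general setup, not the exact estimator used in this work.

```python
# Ontology-term decoding as multi-label classification. X and Y are
# hypothetical: X holds one activation map per row (voxels as features),
# Y marks which ontology terms annotate each map's task.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
n_maps, n_voxels, n_terms = 200, 5000, 12
X = rng.standard_normal((n_maps, n_voxels))      # activation maps
Y = rng.integers(0, 2, size=(n_maps, n_terms))   # binary term annotations

# One linear classifier per ontology term: "is this term engaged?"
decoder = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
predicted_terms = decoder.predict(X[:5])         # reverse inference on new maps
```

Because term frequencies are typically very imbalanced across studies, per-term scores are generally more informative than a single overall accuracy in this kind of setup.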
To date, the construction of human functional brain atlases has primarily relied upon the combination of resting-state fMRI and coordinate-based meta-analyses.
This approach is attractive because of the widespread availability of resting-state fMRI data (from which brain functional networks can be inferred through statistical approaches [17]), and the ability to link function to structure through the use of annotated coordinate-based data (such as those in the BrainMap [18] and Neurosynth [9] databases).
This approach has identified a set of large-scale networks that are consistently related to specific sets of cognitive functions [19, 20], and provides decompositions of specific regions [21, 22].
However, resting-state analysis is limited in the set of functional states that it can identify [23], and meta-analytic databases are limited in the specificity of their task annotations, as well as in the quality of the data, which are reconstructed merely from activation coordinates reported in published papers.
A comprehensive functional brain atlas should link brain structures and cognitive functions in both forward and reverse inferences [7].
To build such a bidirectional mapping, we introduce the concept of “ontology-based decoding,” in which the targets of decoding are specific cognitive features annotated according to an ontology.
This idea was already present in [9, 12, 24]; here we show how an ontology enables scaling it to many cognitive features, increasing breadth. In the present case, we use the Cognitive Paradigm Ontology (CogPO) [15], which provides a common vocabulary of concepts related to psychological tasks and their relationships (see S1 Text, Distribution of terms in our database).
Forward inference then relies on ontology-defined contrasts across experiments, while reverse inference is performed using an ontology-informed decoder to leverage this specific set of oppositions (see Fig 1 and methodological details). We apply these forward and reverse inferences to the individual activation maps of a large task-fMRI database: 30 studies, 837 subjects, 196 experimental conditions, and almost 7,000 activation maps (see S1 Text, Distribution of terms in our database). We use studies from different laboratories that cover various cognitive domains such as language, vision, decision making, and arithmetic.
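As a toy illustration, an ontology-defined forward-inference contrast can be computed as a difference of mean maps between images annotated with a term and images annotated with its ontology neighbors. The arrays, file names, and term names below are hypothetical.

```python
# Sketch of an ontology-defined forward-inference contrast: mean activation
# for maps carrying a term minus the mean for maps carrying its ontology
# neighbors. All names and files here are hypothetical placeholders.
import numpy as np

maps = np.load("activation_maps.npy")                    # (n_maps, n_voxels)
labels = np.load("term_labels.npy", allow_pickle=True)   # one term set per map

term = "visual words"
neighbors = {"visual digits", "visual objects"}          # ontology siblings

has_term = np.array([term in s for s in labels])
has_neighbor = np.array([bool(s & neighbors) and term not in s
                         for s in labels])

# Contrast the term against closely related notions, not against rest,
# so shared low-level responses (e.g. generic visual input) cancel out.
forward_map = maps[has_term].mean(axis=0) - maps[has_neighbor].mean(axis=0)
```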
We start from the raw data to produce statistical brain maps, as this enables homogeneous preprocessing and thorough quality control. The results of this approach demonstrate that it is possible to decode specific cognitive functions from brain activity, even if the subject is performing a task not included in the database.
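That generalization claim corresponds to a leave-one-study-out evaluation: all maps from one study are held out, the decoder is trained on the remaining studies, and performance is measured on the unseen study, whose tasks the decoder never saw during training. A minimal sketch with scikit-learn, on placeholder data:

```python
# Leave-one-study-out evaluation of a term decoder; arrays are
# hypothetical placeholders for a multi-study activation-map database.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_maps, n_voxels = 300, 5000
X = rng.standard_normal((n_maps, n_voxels))  # one activation map per row
y = rng.integers(0, 2, size=n_maps)          # one ontology term: engaged or not
study = rng.integers(0, 30, size=n_maps)     # which of 30 studies each map is from

# Each fold holds out one entire study, so test tasks are unseen at training.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=study, cv=LeaveOneGroupOut())
print(scores.mean())
```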

Fig 1. Our approach characterizes the task conditions that correspond to each brain image with terms from a cognitive ontology. Forward inference maps differences between brain responses for a given term and its neighbors in the ontology, i.e. closely related psychological notions. Reverse inference is achieved by predicting the terms associated with the task from brain activity. The figure depicts the analysis of visual object perception tasks with a motor response. Forward inference captures brain responses in motor, primary visual, and high-level visual areas. Reverse inference captures which regions or neural substrates are predictive of different terms, discarding responses common to different tasks, here in the primary visual cortex.
https://doi.org/10.1371/journal.pcbi.1006565.g001