Biomedical engineers can trace the shapes of active neurons

Figure: Two-photon calcium imaging of intact medial frontal cortex (mFrC) in awake mice. Panels show the in vivo imaging geometry, R-CaMP1.07 expression at depths of 340–1140 µm, neurons identified by the constrained non-negative matrix factorization (cNMF) algorithm, and calcium transient traces from individual neurons.

Biomedical engineers at Duke University have developed an automated process that can trace the shapes of active neurons as accurately as human researchers can, but in a fraction of the time.

This new technique, which uses artificial intelligence to interpret video images, addresses a critical roadblock in neuron analysis, allowing researchers to rapidly gather and process neuronal signals for real-time behavioral studies.

The research appeared this week in the Proceedings of the National Academy of Sciences.

To measure neural activity, researchers typically use a process known as two-photon calcium imaging, which allows them to record the activity of individual neurons in the brains of live animals.

These recordings enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors.
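To make the link between recording and activity trace concrete, here is a minimal sketch (not the authors' code; the variable names and the percentile baseline are illustrative assumptions) of how a fluorescence trace for one circled neuron can be pulled from a calcium-imaging movie and converted to the ΔF/F signal whose peaks mark firing events:

```python
# Hypothetical sketch: extract a fluorescence trace for one hand-drawn
# neuron mask from a two-photon movie. `movie` and `mask` are assumed inputs.
import numpy as np

def delta_f_over_f(movie: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """movie: (T, H, W) fluorescence frames; mask: (H, W) boolean ROI."""
    # Average the pixels inside the ROI on every frame.
    trace = movie[:, mask].mean(axis=1)
    # Use a low percentile of the trace as the baseline fluorescence F0.
    f0 = np.percentile(trace, 10)
    # dF/F: calcium transients appear as peaks above this baseline.
    return (trace - f0) / f0

# Example with synthetic data: 1000 frames of 256x256 pixels.
movie = np.random.rand(1000, 256, 256).astype(np.float32)
mask = np.zeros((256, 256), dtype=bool)
mask[100:110, 120:130] = True  # a hand-circled neuron
trace = delta_f_over_f(movie, mask)
print(trace.shape)  # (1000,)
```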

While these measurements are useful for behavioral studies, identifying individual neurons in the recordings is a painstaking process.

Currently, the most accurate method requires a human analyst to circle every ‘spark’ they see in the recording, often requiring them to stop and rewind the video until the targeted neurons are identified and saved.

To further complicate the process, investigators are often interested in only a small subset of active neurons, which can overlap across different layers within the thousands of neurons being imaged.

This process, called segmentation, is fussy and slow.

A researcher can spend anywhere from four to 24 hours segmenting neurons in a 30-minute video recording, and that’s assuming they’re fully focused for the duration and don’t take breaks to sleep, eat or use the bathroom.

In contrast, a new open-source automated algorithm developed by image processing and neuroscience researchers in Duke’s Department of Biomedical Engineering can accurately identify and segment neurons in minutes.

This video from two-photon imaging shows neurons firing in a mouse brain. Credit: Yiyang Gong, Duke University


“As a critical step towards complete mapping of brain activity, we were tasked with the formidable challenge of developing a fast automated algorithm that is as accurate as humans for segmenting a variety of active neurons imaged under different experimental settings,” said Sina Farsiu, the Paul Ruffin Scarborough Associate Professor of Engineering in Duke BME.

“The data analysis bottleneck has existed in neuroscience for a long time – data analysts have spent hours and hours processing minutes of data, but this algorithm can process a 30-minute video in 20 to 30 minutes,” said Yiyang Gong, an assistant professor in Duke BME.

“We were also able to generalize its performance, so it can operate equally well if we need to segment neurons from another layer of the brain with different neuron size or densities.”

“Our deep learning-based algorithm is fast, and is demonstrated to be as accurate as (if not better than) human experts in segmenting active and overlapping neurons from two-photon microscopy recordings,” said Somayyeh Soltanian-Zadeh, a Ph.D. student in Duke BME and first author on the paper.

Deep-learning algorithms allow researchers to quickly process large amounts of data by sending it through multiple layers of nonlinear processing units, which can be trained to identify different parts of a complex image.
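A minimal sketch of that generic idea, assuming PyTorch and illustrative layer sizes (this is not the network from the paper): a stack of convolutional layers, each followed by a nonlinear activation, maps an imaging frame to a per-pixel probability that the pixel belongs to a neuron.

```python
# Generic illustration only, not the published architecture: stacked
# convolutional layers with nonlinear activations, each layer responding
# to progressively more complex image features.
import torch
import torch.nn as nn

layers = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # low-level edges/blobs
    nn.ReLU(),                                    # nonlinear processing unit
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level textures
    nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=1),              # per-pixel neuron score
    nn.Sigmoid(),                                 # probability each pixel is a neuron
)

frame = torch.randn(1, 1, 128, 128)  # one imaging frame (batch, channel, H, W)
probability_map = layers(frame)      # same spatial size, values in (0, 1)
print(probability_map.shape)         # torch.Size([1, 1, 128, 128])
```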

In their framework, the team created an algorithm that could process both the spatial and the temporal information in the input videos. They then ‘trained’ the algorithm to mimic the segmentation of a human analyst while improving its accuracy.
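The article does not spell out the architecture or training details, but the sketch below illustrates the general recipe under stated assumptions: a tiny network of 3D convolutions sees a short clip of frames (spatial and temporal information together) and is trained with an overlap-based Dice loss to reproduce a human analyst's masks. The layer sizes, clip length, and choice of loss are illustrative, not the authors' published design.

```python
# Hedged sketch of spatio-temporal segmentation training; all
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class TinySpatioTemporalNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 3D convolutions mix information across time (T) and space (H, W).
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.Conv3d(8, 1, kernel_size=1),
        )

    def forward(self, clip):                       # clip: (B, 1, T, H, W)
        scores = self.net(clip)                    # per-voxel neuron scores
        return torch.sigmoid(scores.mean(dim=2))   # collapse time -> (B, 1, H, W)

def dice_loss(pred, target, eps=1e-6):
    """Overlap-based loss; 0 when the prediction matches the human mask."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

model = TinySpatioTemporalNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

clip = torch.randn(1, 1, 16, 64, 64)                    # 16-frame movie snippet
human_mask = (torch.rand(1, 1, 64, 64) > 0.9).float()   # analyst's segmentation

for _ in range(5):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = dice_loss(model(clip), human_mask)
    loss.backward()
    optimizer.step()
```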

The advance is a critical step towards allowing neuroscientists to track neural activity in real time. Because of their tool’s widespread usefulness, the team has made their software and annotated dataset available online.

Gong is already using the new method to more closely study the neural activity associated with different behaviors in mice. By better understanding which neurons fire for different activities, Gong hopes to learn how researchers can manipulate brain activity to modify behavior.

“This improved performance in active neuron detection should provide more information about the neural network and behavioral states, and open the door for accelerated progress in neuroscience experiments,” said Soltanian-Zadeh.

More information: Somayyeh Soltanian-Zadeh, Kaan Sahingur, Sarah Blau, Yiyang Gong, and Sina Farsiu. "Fast and robust active neuron segmentation in two-photon calcium imaging using spatiotemporal deep learning." Proceedings of the National Academy of Sciences, April 12, 2019. DOI: 10.1073/pnas.1812995116
Journal information: Proceedings of the National Academy of Sciences
Provided by Duke University
