AI can identify cancer cells in blood in milliseconds

Deep cytometry: deep learning applied to cell sorting and flow cytometry. A microfluidic channel with a hydrodynamic focusing mechanism uses sheath fluid to align the cells in the center of the field-of-view. Rainbow pulses formed by the time-stretch imaging system capture line images of the cells in the channel, yielding blur-free, quantitative, label-free images of cells flowing at high speed. The output waveforms of the time-stretch imaging system are passed directly to a deep neural network without any signal processing. The network classifies cells rapidly and accurately, fast enough to make decisions before the cells reach the sorting mechanism. Different cell types are given charges of opposite polarity so that they can be separated into different collection tubes.

Researchers at UCLA and NantWorks have developed an artificial intelligence-powered device that detects cancer cells in a few milliseconds – hundreds of times faster than previous methods.

With that speed, the invention could make it possible to extract cancer cells from blood immediately after they are detected, which could in turn help prevent the disease from spreading in the body.

A paper about the advance was published in Scientific Reports, a Nature Research journal.

The approach relies on two core technologies: deep learning and photonic time stretch. Deep learning is a type of machine learning, an artificial intelligence technique in which algorithms are “trained” to perform tasks using large volumes of data.

In deep learning, algorithms called neural networks are modeled after how the human brain works.

Compared to other types of machine learning, deep learning has proven to be especially effective for recognizing and generating images, speech, music and videos.
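The training idea behind deep learning can be illustrated with a toy example. The sketch below is not the authors' model; it trains the simplest possible neural network, a single sigmoid unit, to separate two clusters of labeled points by gradient descent:

```python
import numpy as np

# Toy illustration of supervised training: a single-unit network learns
# to separate two clusters of 2-D points from labeled examples.
# This is NOT the paper's architecture, just the core learning loop.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)
b = 0.0
for _ in range(200):                          # gradient-descent epochs
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
    grad = p - y                             # cross-entropy gradient
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The paper's network is far deeper, but the principle is the same: the weights are adjusted iteratively until the model's outputs match the labels in the training data.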

Photonic time stretch is an ultrafast measurement technology that was invented at UCLA.

Photonic time stretch instruments use ultrashort laser bursts to capture trillions of data points per second, more than 1,000 times faster than today’s fastest microprocessors.

The technology has helped scientists discover rare phenomena in laser physics and invent new types of biomedical instruments for 3-D microscopy, spectroscopy and other applications.

“Because of the extreme volume of precious data they generate, time-stretch instruments and deep learning are a match made in heaven,” said senior author Bahram Jalali, a UCLA professor of electrical and computer engineering at the UCLA Samueli School of Engineering and a member of the California NanoSystems Institute at UCLA.

The system also uses a technology called imaging flow cytometry.

Cytometry is the science of measuring cell characteristics; in imaging flow cytometry, those measurements are obtained by using a laser to take images of the cells one at a time as they flow through a carrier fluid.

Although techniques already exist for categorizing cells in imaging flow cytometry, their processing steps are so slow that devices cannot physically sort the cells before they pass the sorting point.

Building on their previous work, Jalali and his colleagues developed a deep learning pipeline that solves that problem: it operates directly on the laser signals produced during imaging flow cytometry, eliminating the time-intensive processing steps of other techniques.
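Classifying raw signals directly can be sketched as a one-dimensional convolutional forward pass over a digitized waveform. The architecture, filter sizes, and random weights below are illustrative assumptions, not the network described in the paper:

```python
import numpy as np

# Sketch: a tiny 1-D convolutional classifier applied directly to a raw
# waveform, with no image reconstruction or hand-crafted features.
# Layer sizes and weights are illustrative, not the paper's model.
rng = np.random.default_rng(1)

def conv1d(x, kernels):
    """Valid 1-D convolution of signal x with each row of `kernels`."""
    K = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, K)  # (N-K+1, K)
    return windows @ kernels.T                                # (N-K+1, C)

def forward(waveform, kernels, w_out, b_out):
    h = np.maximum(conv1d(waveform, kernels), 0.0)  # conv + ReLU
    pooled = h.mean(axis=0)                         # global average pool
    logits = pooled @ w_out + b_out                 # linear classifier
    return int(logits.argmax())

kernels = rng.normal(size=(8, 16))   # 8 filters of length 16
w_out = rng.normal(size=(8, 2))      # 2 output classes
b_out = np.zeros(2)

waveform = rng.normal(size=1024)     # stand-in for a digitized pulse
label = forward(waveform, kernels, w_out, b_out)
print("predicted class:", label)
```

Because the whole pipeline is a single forward pass over the raw samples, there is no separate image-reconstruction or feature-extraction stage to wait for.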

“We optimized the design of the deep neural network to handle the large amounts of data created by our time-stretch imaging flow cytometer—upgrading the performance of both the software and instrument,” said Yueqin Li, a visiting doctoral student and the paper’s first author.

Ata Mahjoubfar, a UCLA postdoctoral researcher and a co-author of the paper, said the technique allows the instrument to determine whether a cell is cancerous virtually instantaneously.

“We don’t need to extract biophysical parameters of the cells anymore,” he said. “Instead, deep neural networks analyze the raw data itself extremely quickly.”

In order for label-free real-time imaging flow cytometry to become a feasible methodology, imaging, signal processing, and data analysis need to be completed while the cell is traveling the distance between the imaging point (field-of-view of the camera) in the microfluidic channel and the cell sorting mechanism (Fig. 6).

During imaging, the time-stretch imaging system is used to rapidly capture the spatial information of cells at high throughput. A train of rainbow flashes illuminates the target cells as line scans.

The features of the cells are encoded into the spectrum of these optical pulses, representing one-dimensional frames. Pulses are stretched in a dispersive optical fiber, mapping their spectrum to time.

They are sequentially captured by a photodetector, and converted to a digital waveform, which can be analyzed by the neural network.
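The wavelength-to-time mapping follows from the fiber's total group-velocity dispersion: the stretched duration is roughly |D_total| × Δλ. The numbers below are illustrative, not the instrument's actual parameters:

```python
# Back-of-the-envelope for wavelength-to-time mapping in a dispersive
# fiber: delta_t = |D_total| * delta_lambda. Values are illustrative,
# not the paper's instrument parameters.
D_total_ps_per_nm = 500.0      # total group-velocity dispersion, ps/nm
bandwidth_nm = 20.0            # optical bandwidth of one rainbow pulse

stretched_duration_ps = D_total_ps_per_nm * bandwidth_nm
print(f"stretched pulse duration: {stretched_duration_ps / 1000:.0f} ns")
# A 20 nm pulse mapped at 500 ps/nm spans 10 ns, slow enough for a
# photodetector and digitizer to record the spectrum as a waveform.
```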

The imaging and data capture take less than 0.1 ms per waveform element, each of which covers a 25 μm field-of-view along the channel and typically contains either a single cell surrounded by suspension buffer or no cell at all. The delay in making a sorting decision is therefore dominated by the neural network's processing time.
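As a quick sanity check on these figures (using the 1.3 m/s flow speed reported later in the text), a cell crosses the 25 μm field-of-view in roughly 19 μs, comfortably within the 0.1 ms capture window:

```python
# Sanity check: time for a cell to cross the field-of-view.
# Flow speed is the value quoted for the setup later in the text.
fov_um = 25.0                 # field-of-view along the channel, micrometers
flow_speed_m_per_s = 1.3      # cell flow speed

transit_us = fov_um / flow_speed_m_per_s  # um / (m/s) comes out in us
print(f"transit time: {transit_us:.1f} us")  # well under 0.1 ms (100 us)
```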

To quickly classify the target cells based on the collected data, we demonstrate the utility of analyzing waveforms directly by a deep neural network, referred to as deep cytometry.

The classification model is trained offline using datasets for the target cell types, and then used in an online system for cell sorting.

The processing time of this model (the latency for inference of a single-example batch by a previously trained model) is 23.2 ms per example using an Intel Xeon CPU (8 cores), 8.6 ms per example on an NVIDIA Tesla K80 GPU, and 3.6 ms per example on an NVIDIA Tesla P100 GPU (Table 2).

Thus, for our setup with the cell flow rate of 1.3 m/s in the microfluidic channel, the cells travel 30.2 mm for the Intel CPU, 11.2 mm for the NVIDIA K80 GPU, or 4.7 mm for the NVIDIA P100 GPU before the classification decision is made.
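The travel distances quoted above are simply flow speed multiplied by inference latency, which can be checked directly:

```python
# Reproducing the travel-distance arithmetic from the text:
# distance = flow speed x inference latency.
flow_speed_m_per_s = 1.3
latencies_ms = {"Xeon CPU": 23.2, "Tesla K80": 8.6, "Tesla P100": 3.6}

for device, ms in latencies_ms.items():
    mm = flow_speed_m_per_s * ms  # (m/s) x ms comes out in mm
    print(f"{device}: {mm:.1f} mm of travel before a decision")
```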

So, the microfluidic channels should be at least as long as these travel distances. Fabricating channels of such lengths is entirely practical, and the cells remain ordered over these short distances.

Therefore, the type of each cell can be determined by our model in real time, before it reaches the cell sorter. Flow speeds are often lower than in our setup, which relaxes the length requirement further.

Besides the time-stretch imaging signals used in the demonstrations here, our deep learning approach for real-time analysis of flow cytometry waveforms, namely deep cytometry, can also be applied to the signals captured by other sensors such as CMOS (complementary metal-oxide semiconductor) or CCD (charge-coupled device) imagers, photomultiplier tubes (PMTs), and photodiodes.

More information: Yueqin Li et al., Deep Cytometry: Deep Learning with Real-time Inference in Cell Sorting and Flow Cytometry, Scientific Reports (2019). DOI: 10.1038/s41598-019-47193-6

Journal information: Scientific Reports
Provided by University of California, Los Angeles

