Imagine an attractive person walking toward you.
Do you look up and smile? Turn away? Approach but avoid eye contact?
The setup is the same, but the outcomes depend entirely on your “internal state,” which includes your mood, your past experiences, and countless other variables that are invisible to someone watching the scene.
So how can an observer decode internal states by watching outward behaviors?
That was the challenge facing a team of Princeton neuroscientists. Rather than tackling the intricacies of human brains, they investigated fruit flies, which have fewer behaviors and, one imagines, fewer internal states. They built on prior work studying the songs and movements of amorous Drosophila melanogaster males.
“Our previous work was able to predict a portion of singing behaviors, but by estimating the fly’s internal state, we can accurately predict what the male will sing over time as he courts a female,” said Mala Murthy, a professor of neuroscience and the senior author on a paper appearing in today’s issue of Nature Neuroscience with co-authors Jonathan Pillow, a professor of psychology and neuroscience, and Adam Calhoun, a postdoctoral research fellow at the Princeton Neuroscience Institute (PNI).
Their models use observable variables like the speed of the male or his distance to the female.
The researchers identified three separate types of songs, generated by wing vibration, plus the choice not to sing. They then linked the song decisions to the observable variables.
The key was building a machine learning model with a new assumption: animals don’t change their behaviors at random, but based on a combination of the feedback they are getting from the female and the state of their own nervous system.
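To make that modeling idea concrete, here is a minimal, hypothetical sketch (not the authors’ actual model or code) in which the probability of each song choice depends both on observable feedback variables and on a latent internal state that persists over time; the song labels, the two-state setup, and the random parameters are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: the male's song choice depends both on observable
# feedback (his speed, his distance to the female) and on a hidden internal
# state that persists across time steps. Illustrative assumptions only.

rng = np.random.default_rng(0)
SONGS = ["song_A", "song_B", "song_C", "no_song"]  # placeholder song labels
N_STATES = 2                                       # assumed number of internal states

# Per-state weights mapping the two feedback features to song preferences (random stand-ins).
W = rng.normal(size=(N_STATES, len(SONGS), 2))
# Internal-state transition probabilities: states tend to persist from step to step.
T = np.array([[0.95, 0.05],
              [0.10, 0.90]])

def song_probs(state, speed, dist):
    """Softmax over song choices given the current internal state and feedback."""
    logits = W[state] @ np.array([speed, dist])
    e = np.exp(logits - logits.max())
    return e / e.sum()

state = 0
for t in range(5):
    speed, dist = rng.uniform(0.0, 1.0, size=2)    # stand-in feedback measurements
    song = rng.choice(SONGS, p=song_probs(state, speed, dist))
    print(f"t={t} state={state} song={song}")
    state = rng.choice(N_STATES, p=T[state])       # the internal state evolves over time
```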
Using their new method, they discovered that males pattern their songs in three distinct ways, each lasting tens to hundreds of milliseconds.
They named each of the three states: “Close,” when a male is closer than average to a female and approaching her slowly; “Chasing,” when he is approaching quickly; and “Whatever,” when he is facing away from her and moving slowly.
The researchers showed that these states correspond to distinct strategies, and then they identified neurons that can control how the males switch between strategies.
“This is an important breakthrough,” said Murthy. “We anticipate that this modeling framework will be widely used for connecting neural activity with natural behavior.”
Funding: This work was funded by the Simons Foundation (AWD494712, AWD1004351, and AWD543027), the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative at the National Institutes of Health (NS104899), and the Howard Hughes Medical Institute.
Recent years have seen a renaissance in machine learning and machine vision, led by neural network algorithms that now achieve impressive performance on a variety of challenging object recognition and image understanding tasks [1–3]. Despite this rapid progress, the performance of machine vision algorithms continues to trail humans in many key domains, and tasks that require operating with limited training data or in highly cluttered scenes are particularly difficult for current algorithms [4–7].
Moreover, the patterns of errors made by today’s algorithms differ dramatically from those of humans performing the same tasks [8,9], and current algorithms can be “fooled” by subtly altering images in ways that are imperceptible to humans, but which lead to arbitrary misclassifications of objects [10–12].
Thus, even when algorithms do well on a particular task, they do so in a way that differs from how humans do it and that is arguably more brittle.
The human brain is a natural frame of reference for machine learning, because it has evolved to operate with extraordinary efficiency and accuracy in complicated and ambiguous environments. Indeed, today’s best algorithms for learning structure in data are artificial neural networks [13–15], and strategies for decision making that incorporate cognitive models of Bayesian reasoning [16] and exemplar learning [17] are prevalent.
There is also growing overlap between machine learning and the fields of neuroscience and psychology: In one direction, learning algorithms are used for fMRI decoding [18–21], neural response prediction [22–26], and hierarchical modeling [27–29]. Concurrently, machine learning algorithms are leveraging biological concepts like working memory [30], experience replay [31], and attention [32,33], and are being encouraged to borrow more insights from the inner workings of the human brain [34]. Here we propose an even more direct connection between these fields: we ask whether we can improve machine learning algorithms by explicitly guiding their training with measurements of brain activity, with the goal of making the algorithms more human-like.
Our strategy is to bias the solution of a machine learning algorithm so that it more closely matches the internal representations found in visual cortex. Previous studies have constrained learned models via human behavior [8,35], and one study introduced a method for mapping images to “brain-like” features extracted from EEG recordings [36]. Furthermore, recent advances in machine learning have focused on improving the feature representations of different kinds of data, often in a biologically consistent way [26].
However, no study to date has taken advantage of measurements of brain activity to guide the decision-making process of a machine learning algorithm.
While our understanding of human cognition and decision making is still limited, we describe a method with which we can leverage the human brain’s robust representations to guide a machine learning algorithm’s decision boundary.
Our approach weights how much an algorithm learns from each training exemplar, roughly based on the “ease” with which the human brain appears to recognize the example as a member of a class (i.e., an image in a given object category). This work builds on previous machine learning approaches that weight training [8,37], but here we propose to do such weighting using a separate stream of data, derived from human brain activity.
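As a rough illustration of this kind of per-exemplar weighting (the weights below are random stand-ins rather than values actually derived from brain data, and the feature dimensions are arbitrary), many standard classifiers accept per-sample weights directly:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical illustration of per-exemplar weighting. In the paradigm above,
# each training image would receive a weight reflecting how "easily" the human
# brain appears to recognize it; here the weights are random stand-ins.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))          # stand-in image features (e.g., HOG or CNN)
y = rng.integers(0, 2, size=200)         # binary labels: in-category vs. not

ease = rng.uniform(0.0, 1.0, size=200)   # stand-in "ease" scores per training image

clf = SVC(kernel="linear")
clf.fit(X, y, sample_weight=ease)        # examples that are "easy" for the brain count more

print("training accuracy:", clf.score(X, y))
```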
Below, we describe our biologically-informed machine learning paradigm in detail, outline an implementation of the technique, and present results that demonstrate its potential to learn more accurate, biologically-consistent decision boundaries. We trained supervised classification models for four visual object categories (i.e., humans, animals, places, and foods), weighting individual training images by values derived from fMRI recordings of human visual cortex made while subjects viewed those same images; once trained, these models classify images without the benefit of neural data.
Our “neurally-weighted” models were trained on two kinds of image features: (1) histogram of oriented gradients (HOG) features [38] and (2) convolutional neural network (CNN) features (specifically, the 1000-dimensional, pre-softmax activations from AlexNet [13] pre-trained on the ImageNet dataset [1]). HOG features were the standard, off-the-shelf image representation before the 2012 advent of powerful CNNs [1], while CNNs pre-trained on large datasets like ImageNet are known to produce strong, general image features that transfer well to other tasks [39,40].
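Both feature types can be computed with standard libraries; the sketch below uses skimage for HOG descriptors and torchvision’s pre-trained AlexNet for the 1000-dimensional pre-softmax activations, with preprocessing settings that are assumptions rather than the study’s exact configuration.

```python
import torch
from skimage.feature import hog
from skimage.transform import resize
from torchvision import models, transforms

# Illustrative feature extraction; parameter choices are assumptions, not the
# study's exact settings.

def hog_features(image_gray):
    """HOG descriptor for a single grayscale image (2-D float array)."""
    img = resize(image_gray, (128, 128))
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# AlexNet pre-trained on ImageNet; its final-layer outputs (logits) are the
# 1000-dimensional pre-softmax activations used as CNN features.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def cnn_features(pil_image):
    """1000-dimensional pre-softmax AlexNet activations for an RGB PIL image."""
    x = preprocess(pil_image).unsqueeze(0)
    with torch.no_grad():
        return alexnet(x).squeeze(0).numpy()
```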
While machine vision research has largely focused on improving feature representations in order to make gains on various challenging visual tasks, a complementary approach, under which our paradigm falls, is to improve the decision-making process. We therefore hypothesized that our decision-boundary-biasing paradigm would yield larger gains when coupled with the weaker HOG features, making them more competitive with the stronger CNN features.
Finally, these models were evaluated for improvements over baseline performance and analyzed to determine which regions of interest (ROIs) in the brain had the greatest impact on performance.
Source:
Princeton University
Media Contacts:
Liz Fuller-Wright – Princeton University
Original Research: Closed access
“Unsupervised identification of the internal states that shape natural behavior”. Adam J. Calhoun, Jonathan W. Pillow & Mala Murthy.
Nature Neuroscience doi:10.1038/s41593-019-0533-x.