The visual areas constitute about 25% of the cortex in humans and contain approximately 5 billion neurons.
These regions process a multitude of information (shape, orientation, color, movement, size, etc.) arriving through the visual pathways, thereby reconstructing the image projected onto the retina.
The cortical area of higher mammals such as cats, monkeys, and humans is generally divided into modules of selectivity (e.g., the visual cortex is divided into areas of selectivity called orientation columns).
Several characteristics of the visual system of mammals appear to be common to many species [19, 20], though in lower animals such as rats and mice the neurons are distributed in a salt-and-pepper fashion across the visual cortex, lacking orientation domains [21, 22].
Animal models are widely used to investigate the structure and function of the visual system.
Monkeys, cats, and mice are commonly used in neurophysiological experiments to understand cortical mechanisms in general and the visual pathways in particular.
From retina to visual areas
Visual perception begins in the retina where the received light is transformed into electrical signal by a biochemical cascade produced in the rods and cones.
The retinal ganglion cells relay the message to the lateral geniculate nucleus (LGN), which consists of six layers.
Each layer receives information from the retinal hemifield of one eye.
The axon terminals of the ganglion cells projecting onto each layer form a precise retinotopic map.
This retinotopy denotes the spatial organization of neuronal responses to visual stimuli.
Indeed, in many parts of the brain, neurons that respond to stimulation from a given portion of the visual field are located right next to neurons whose receptive fields cover adjacent portions.
Therefore, the neurons in these brain regions form a topographic map of the visual field, derived from its projection onto the retina.
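Such a topographic map can be described quantitatively. A classic account of V1 retinotopy is the complex-logarithmic (monopole) model, which also captures cortical magnification: equal steps in the visual field near the fovea occupy more cortical surface than equal steps in the periphery. The sketch below uses illustrative, not fitted, parameter values:

```python
import numpy as np

def visual_to_cortex(ecc_deg, angle_deg, k=15.0, a=0.7):
    """Complex-log (monopole) model of V1 retinotopy: the visual-field
    location is written as a complex number z (eccentricity in degrees,
    polar angle), and the cortical position (in mm) is w = k * log(z + a).
    The values of k and a here are illustrative, not fitted."""
    z = ecc_deg * np.exp(1j * np.deg2rad(angle_deg))
    w = k * np.log(z + a)
    return w.real, w.imag

# Cortical magnification: a 1-degree step near the fovea spans far
# more cortex than a 1-degree step in the periphery.
x1, _ = visual_to_cortex(1.0, 0.0)
x2, _ = visual_to_cortex(2.0, 0.0)
x10, _ = visual_to_cortex(10.0, 0.0)
x11, _ = visual_to_cortex(11.0, 0.0)
print(f"foveal step: {abs(x2 - x1):.2f} mm, peripheral step: {abs(x11 - x10):.2f} mm")
```

With these parameters the 1–2 degree step covers roughly five times more cortical distance than the 10–11 degree step, mirroring the over-representation of central vision in the cortex.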
From the LGN, axons are organized into thalamocortical fibres forming the optic radiations.
These optic radiations project onto the cortex in specialized visual areas.
The distribution of fibres in the cortex reproduces the visual field on the cortical sheet, and stimulation of a small cortical area elicits bright spots called ‘phosphenes’ at a specific location of the visual field.
The visual areas begin in the occipital lobe, and the primary visual cortex, or area 17, is the main entry point to the cortex for thalamic relay cells.
There is another system of alternating columns, which corresponds to the separation of afferents from both eyes.
These are the ocular dominance columns.
These bands are particularly pronounced in cortical layer IV, which receives the afferent terminals from the lateral geniculate nucleus.
Many findings have led to the discovery of some thirty distinct cortical areas that contribute to visual perception.
The primary (V1) and secondary (V2) areas are surrounded by many other tertiary or associative visual areas, such as V3, V4, and V5 (or MT), involved in processing various attributes of trigger features [18, 34].
Area V5, or MT (middle temporal), is an area where the majority of cells are sensitive to motion and direction, and none are selective for color.
Moreover, the parallel organization of the visual system gives rise to two major visual pathways, the ventral and dorsal pathways, which are indispensable for object recognition [38, 39].
The ventral pathway is involved in processing information about the characteristics of objects (shapes, colours, materials), that is, object recognition, including faces.
The dorsal pathway (shown in orange in the corresponding figure) ends in the parietal lobe [38, 40]; this path is associated with the spatial vision (action/location) of objects and is involved in processing action in space.
Pyramidal cells and interneurons
Two types of neurons are mainly observed: pyramidal cells and interneurons. They can be separated physiologically and are the focus of this chapter, namely how they modify their properties and change their linkage with each other after adaptation.
Pyramidal cells are excitatory neurons that project to other brain regions [43, 44], whereas stellate cells, which receive input from the relay cells of the LGN, correspond to local excitatory interneurons.
In the primary visual cortex, each layer has specific cell types and connectivity. Layer IV contains many stellate cells, small neurons whose dendrites are arranged radially around the cell body.
Pyramidal cells are found in layers II-III, V, and VI and are the only type of neuron that sends axons outside the cortex.
These neurons exhibit two levels of dendritic extension: basal dendrites close to the cell body, and relatively long apical dendritic branches sometimes extending across the entire thickness of the cortex.
Figure 3 illustrates a typical example of cells distinguished by their waveforms: fast spikes and regular spikes. Figure 3a corresponds to a fast spike, with a steeper ascending slope of the action potential, representing a putative interneuron, whereas Figure 3b corresponds to a regular spike, which exhibits a slower ascending slope and represents a putative pyramidal cell.
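The text distinguishes the two classes by the ascending slope of the action potential; a closely related and widely used criterion is the trough-to-peak width of the extracellular waveform (narrow for fast spikes, broad for regular spikes). The following is a minimal hypothetical sketch; the sampling rate, threshold, and toy waveforms are illustrative only:

```python
import numpy as np

SAMPLE_RATE_HZ = 30_000  # hypothetical extracellular sampling rate

def trough_to_peak_ms(waveform):
    """Time from the spike trough to the following peak, in milliseconds."""
    trough = int(np.argmin(waveform))
    peak = trough + int(np.argmax(waveform[trough:]))
    return 1000.0 * (peak - trough) / SAMPLE_RATE_HZ

def classify(waveform, threshold_ms=0.5):
    """Narrow (fast-spike) waveforms -> putative interneuron;
    broad (regular-spike) waveforms -> putative pyramidal cell."""
    if trough_to_peak_ms(waveform) < threshold_ms:
        return "putative interneuron"
    return "putative pyramidal cell"

# Two toy waveforms (60 samples = 2 ms): a narrow and a broad spike.
fast = np.zeros(60); fast[20] = -1.0; fast[25] = 0.6  # ~0.17 ms trough-to-peak
reg = np.zeros(60); reg[20] = -1.0; reg[45] = 0.6     # ~0.83 ms trough-to-peak

print(classify(fast), classify(reg))
```

In real recordings the threshold between the two classes is chosen from the (typically bimodal) distribution of widths across the recorded population, not fixed in advance.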
Why do our eyes tend to be drawn more to some shapes, colors, and silhouettes than others?
For more than half a century, researchers have known that neurons in the brain’s visual system respond unequally to different images – a feature that is critical for the ability to recognize, understand, and interpret the multitude of visual cues surrounding us.
For example, specific populations of visual neurons in an area of the brain known as the inferior temporal cortex fire more when people or other primates – animals with highly attuned visual systems – look at faces, places, objects, or text.
But exactly what these neurons are responding to has remained unclear.
Now a small study in macaques led by investigators in the Blavatnik Institute at Harvard Medical School has generated some valuable clues based on an artificial intelligence system that can reliably determine what neurons in the brain’s visual cortex prefer to see.
A report of the team’s work was published today in Cell.
The vast majority of experiments to date that attempted to measure neuronal preferences have used real images.
But real images carry an inherent bias:
They are limited to stimuli available in the real world and to the images that researchers choose to test.
The AI-based program overcomes this hurdle by creating synthetic images tailored to the preference of each neuron.
Will Xiao, a graduate student in the Department of Neurobiology at Harvard Medical School, designed a computer program that uses a form of responsive artificial intelligence to create self-adjusting images based on neural responses obtained from six macaque monkeys.
To do so, he and his colleagues measured the firing rates from individual visual neurons in the brains of the animals as they watched images on a computer screen.
Over the course of a few hours, the animals were shown images in 100-millisecond blips generated by Xiao’s program.
The images started out with a random textural pattern in grayscale.
Based on how much the monitored neurons fired, the program gradually introduced shapes and colors, morphing over time into a final image that fully embodied a neuron’s preference.
Because each of these images is synthetic, Xiao said, it avoids the bias that researchers have traditionally introduced by only using natural images.
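The closed-loop procedure described above — generate images, measure firing, keep what drives the neuron, and mutate — is essentially an evolutionary search over the latent codes of a generative network. The sketch below is a toy stand-in, not the authors’ actual implementation: the “generator” is a random linear map, the “neuron” is a simulated unit with a hidden preferred image, and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the deep generative network: a fixed random linear
# map from a latent code to a small grayscale "image" in [0, 1].
LATENT_DIM, IMG_PIXELS = 16, 64
G = rng.standard_normal((IMG_PIXELS, LATENT_DIM))

def generate(latent):
    """Map a latent code to an image (values squashed to [0, 1])."""
    return 1.0 / (1.0 + np.exp(-G @ latent))

# Simulated recorded neuron: fires more when the image resembles its
# preferred stimulus, which is hidden from the search algorithm.
preferred = generate(rng.standard_normal(LATENT_DIM))

def firing_rate(image):
    return float(image @ preferred)  # higher = closer to the preference

def evolve(generations=200, pop_size=32, elite=8, noise=0.3):
    """Closed-loop search: keep the latents whose images drive the
    neuron hardest, then copy and mutate them for the next generation."""
    pop = rng.standard_normal((pop_size, LATENT_DIM))
    for _ in range(generations):
        rates = np.array([firing_rate(generate(z)) for z in pop])
        parents = pop[np.argsort(rates)[-elite:]]            # top responders
        children = parents[rng.integers(0, elite, pop_size)]  # copy parents
        pop = children + noise * rng.standard_normal(children.shape)
    best = pop[np.argmax([firing_rate(generate(z)) for z in pop])]
    return generate(best)

start = generate(rng.standard_normal(LATENT_DIM))
final = evolve()
print(f"random image rate: {firing_rate(start):.1f}, "
      f"evolved image rate: {firing_rate(final):.1f}")
```

The actual experiments replace the linear map with a deep generative network, and the simulated firing rate with spike counts recorded from macaque cortex over 100-millisecond presentations.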
A new computer program uses artificial intelligence to determine what visual neurons like to see. The approach could shed light on learning disabilities, autism spectrum disorders, and other neurologic conditions. The image is adapted from the Harvard news release.
“At the end of each experiment,” he said, “this program generates a super-stimulus for these cells.”
The results of these experiments were consistent over separate runs, explained senior investigator Margaret Livingstone: for a given neuron, the program tended to evolve images that were not identical but were remarkably similar.
Some of these images were in line with what Livingstone, the Takeda Professor of Neurobiology at HMS, and her colleagues expected. For example, a neuron that they suspected might respond to faces evolved round pink images with two big black dots akin to eyes. Others were more surprising. A neuron in one of the animals consistently generated images that looked like the body of a monkey, but with a red splotch near its neck. The researchers eventually realized that this monkey was housed near another that always wore a red collar.
“We think this neuron responded preferentially not just to monkey bodies but to a specific monkey,” Livingstone said.
Not every final image looked like something recognizable, Xiao added. One monkey’s neuron evolved a small black square. Another evolved an amorphous black shape with orange below.
Livingstone notes that research from her lab and others has shown that the responses of these neurons are not innate — instead, they are learned through consistent exposure to visual stimuli over time. When during development this ability to recognize and fire preferentially to certain images arises is unknown, Livingstone said. She and her colleagues plan to investigate this question in future studies.
Learning how the visual system responds to images could be key to better understanding the basic mechanisms that drive cognitive issues ranging from learning disabilities to autism spectrum disorders, which are often marked by impairments in a child’s ability to recognize faces and process facial cues.
“This malfunction in the visual processing apparatus of the brain can interfere with a child’s ability to connect, communicate, and interpret basic cues,” said Livingstone. “By studying those cells that respond preferentially to faces, for example, we could uncover clues to how social development takes place and what might sometimes go awry.”
Funding: The research was funded by the National Institutes of Health and the National Science Foundation.

ABOUT THIS NEUROSCIENCE RESEARCH ARTICLE
Christy Brownlee – Harvard
Original Research: Open access
“Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences”. Carlos R. Ponce, Will Xiao, Peter F. Schade, Till S. Hartmann, Gabriel Kreiman, Margaret S. Livingstone.