Scientists have long sought to develop a workable model to help them understand the complexities of the neural encoding of sensory information in living organisms. Now, a new framework is emerging for neuronal systems, based on information theory. Known as stimulus reconstruction, or reverse reconstruction, this theoretical model underlies most of the latest experimental studies taking place in this area.
"Our understanding of how the activities of neural systems lead to adaptive and intelligent behavior is still in a primitive state," said John W. Clark of Washington University, who chaired a Division of Biological Physics Tuesday morning session on the subject at the APS March Meeting in St. Louis. "However, it appears that a unified framework for neural computation is emerging, grounded in fundamental principles of stochastic systems analysis, information theory, and statistical inference." Information theory derives the mathematical laws that govern information-manipulating devices such as audio equipment and telecommunications systems.
According to Clark, traditional approaches to the problem of neural coding have emphasized determination of the transformation leading from a known stimulus to average neural responses. However, organisms face nearly the opposite task: they must extract information about an unknown time-dependent stimulus from a short sequence of action-potential pulses sent by a receptor neuron.
To help explain such a nonlinear system, Charles Anderson, of Washington University, has developed a unified approach to issues of neural coding, computation, and systems integration, based on the hypothesis that ensembles of neurons collectively encode and process probability density functions of analog quantities. "It's like taking an audio signal and turning it into a series of samples that are stored on a compact disc," said Anderson of his theoretical model. "The CD player then turns those samples back into a time-dependent signal that goes into your amplifier and out of your speaker. In a similar way, the nervous system also encodes and decodes information that passes from one neuron to another."
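The ensemble-coding hypothesis can be illustrated with a toy calculation. This is a hedged sketch only: the Gaussian tuning curves, the grid of preferred values, and the normalizing decoder below are illustrative inventions, not Anderson's actual model.

```python
import numpy as np

# Illustrative sketch: an ensemble of neurons with Gaussian tuning curves
# collectively represents a density over an analog stimulus value.
# All parameters here are invented for illustration.

centers = np.linspace(-1.0, 1.0, 21)   # preferred stimulus values of 21 neurons
width = 0.2                            # tuning-curve width
dx = centers[1] - centers[0]

def encode(x):
    """Firing rates of the ensemble in response to stimulus value x."""
    return np.exp(-0.5 * ((x - centers) / width) ** 2)

def decode(rates):
    """Read the rate vector back out as a density over the preferred values."""
    p = np.clip(rates, 0.0, None)
    return p / (p.sum() * dx)

density = decode(encode(0.3))
estimate = np.sum(centers * density) * dx   # mean of the decoded density
```

Reading the ensemble back out recovers the encoded value (here, 0.3) from the decoded density, much as the CD player in Anderson's analogy recovers the audio signal from its samples.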
The explicit encoding of uncertainty at a very low level in neuronal systems contrasts strongly with standard electronic computers, where extreme measures are taken to ensure precise computation at all levels of processing. "The application of information theory clears up a lot of the ambiguities and questions that people have about the differences between digital computers and neuronal systems," said Anderson.
Anderson's theoretical models are being tested by Washington University neurophysiologist David Van Essen, who is using macaque monkeys to study aspects of visual processing.
Studying single neurons in the fly visual system, researchers at Stanford University applied stochastic systems analysis to devise algorithms for real-time stimulus estimation. The algorithm models real-time analog signal processing by spiking neurons. The basic idea of reconstructing a stimulus from a spike train was subsequently extended from a single neuron to an ensemble of neurons with broadly tuned responses.
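The reconstruction idea can be sketched numerically: estimate the stimulus as a linear functional of the spike train, with a reconstruction kernel fit by least squares. The threshold-crossing encoder below is a toy stand-in of my own invention, not the fly neuron studied in the experiments.

```python
import numpy as np

# Minimal sketch of reverse reconstruction: place a kernel at each spike
# time and sum, with the kernel chosen by least squares. The encoder is a
# toy threshold unit, not a model of the fly visual neuron.

rng = np.random.default_rng(0)
n = 4000
stimulus = np.convolve(rng.standard_normal(n), np.ones(25) / 25, mode="same")

# Toy encoder: emit a spike whenever the stimulus crosses 0.05 from below.
crossings = (stimulus[1:] >= 0.05) & (stimulus[:-1] < 0.05)
spike_train = np.concatenate(([0.0], crossings.astype(float)))

# Columns are time-shifted copies of the spike train; least squares then
# finds the kernel whose spike-triggered sum best matches the stimulus.
lags = np.arange(-50, 51)
X = np.stack([np.roll(spike_train, lag) for lag in lags], axis=1)
kernel, *_ = np.linalg.lstsq(X, stimulus, rcond=None)
estimate = X @ kernel                        # in-sample reconstruction

quality = np.corrcoef(stimulus, estimate)[0, 1]
```

Even with this crude encoder, the fitted kernel recovers an estimate that correlates positively with the stimulus; the experimental work applies the same strategy to spike trains recorded from real neurons.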
To determine characteristics of encoded spike trains, Fred Rieke, a postdoctoral fellow at Stanford University, conducted a series of experiments involving the bullfrog auditory system, which is similar to that of humans in that it uses a system of internal codes to translate sounds into a form the brain can understand.
Rieke's group monitored the electrical activity in bullfrog auditory neurons while the frogs listened to two kinds of sounds: white noise and frog calls. The sequences of spikes encode information about the sounds, and the spike pattern creates a characteristic "fingerprint" for each sound. The group found that the representation of frog calls in these spike trains was much more accurate than the representation of white noise.
Rieke said of the results: "Researchers have known that organisms have a way of filtering out unimportant information, but it was thought that most of this selectivity takes place in the brain. It now looks as though this filtering may take place earlier, in the sensory nerves that carry the information to the brain."
The reverse reconstruction strategy in its ensemble version has also been applied by researchers at the University of California at Berkeley to study sensory coding in the cricket cercal system.
Berkeley collaborator John P. Miller and his group identified a subset of four neurons capable of determining the direction of low-velocity wind pulses with great accuracy and reliability. To do this, they presented a set of air current stimuli to a cricket within a miniature wind tunnel and recorded the spiking activity elicited in the neurons by those stimuli. Using the analytical tools of information theory, they were able to characterize aspects of the information-carrying capability of the nerve cells in terms of bits and baud rates, the same measures used in performance specifications for modems and other electronic signal transmission devices.
By dividing the transfer rates for information about air current velocity by the rate at which the neurons were firing spikes, the team obtained measurements of between one and three bits per spike, remarkably high figures that indicate a high level of encoding efficiency.
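The bits-per-spike figure is simple arithmetic once an information rate and a firing rate are in hand. The numbers below are invented for illustration, not Miller's data; the integral form is the standard Gaussian-channel expression from information theory.

```python
import numpy as np

# Illustrative arithmetic only: the SNR spectrum and firing rate are
# made-up numbers, not the cricket measurements. For a Gaussian channel,
# the information rate is the integral over frequency of log2(1 + SNR(f));
# dividing by the firing rate gives bits per spike.

freqs = np.linspace(0.0, 100.0, 201)          # Hz
snr = 3.0 * np.exp(-freqs / 40.0)             # invented signal-to-noise spectrum
df = freqs[1] - freqs[0]
info_rate = np.sum(np.log2(1.0 + snr)) * df   # bits per second
firing_rate = 75.0                            # spikes per second (invented)

bits_per_spike = info_rate / firing_rate
```

With these toy numbers the result lands in the one-to-three-bits-per-spike range reported for the cercal neurons.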
"These studies are of general interest, not only because of the importance of understanding the nature of the neural code itself, but also because of the very important constraints a knowledge of the code can place on the derivation and testing of physiological models for the mechanisms underlying neural function," said Miller.
Jacob Levin, also at Berkeley, modified the cricket experiments slightly by introducing broadband random background-noise air currents of varying amplitude. Although noise is generally considered detrimental to a system's performance, there are systems in which a certain amount of externally applied noise can actually improve performance. The effect is attributable to a phenomenon known as stochastic resonance, in which the addition of random noise to a weak signal can increase the quality of the signal's transmission (see related story, page 4).
Levin found that with a single frequency tone, the output signal-to-noise ratio increased with added input noise to a maximum level, and then fell as the input noise amplitude was increased beyond that level. Further experiments using broadband signals, as opposed to a single frequency, revealed that these improvements were observable over the entire frequency operating range of the neurons, and for almost an entire order of magnitude of near-threshold signal amplitudes. "For the first time, an enhancement of signal encoding was observed over a biologically relevant range of signal frequencies and amplitudes, and we have quantified it in a mathematically rigorous way, through the tools of information theory," he said.
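The noise-induced peak can be demonstrated with a toy simulation, assuming nothing more than a bare threshold detector (my stand-in, not the cercal neurons): a subthreshold sinusoid alone never crosses threshold, heavy noise drowns the signal, and an intermediate noise level maximizes the output power at the signal frequency.

```python
import numpy as np

# Toy stochastic-resonance demonstration: a subthreshold sinusoid drives a
# threshold detector. With too little noise the signal never crosses
# threshold; with too much, crossings are random; at an intermediate noise
# level the output carries the most power at the signal frequency.
# All parameters are illustrative, not the cricket experiment's.

rng = np.random.default_rng(1)
n = 20000
t = np.arange(n)
f_sig = 0.01                                   # signal frequency (cycles/sample)
signal = 0.5 * np.sin(2 * np.pi * f_sig * t)   # peak 0.5, threshold 1.0
phasor = np.exp(-2j * np.pi * f_sig * t)

def signal_power(noise_std, trials=20):
    """Average output power at the signal frequency for a given noise level."""
    total = 0.0
    for _ in range(trials):
        spikes = (signal + noise_std * rng.standard_normal(n) > 1.0).astype(float)
        c = abs(np.sum(spikes * phasor)) / n   # Fourier component at f_sig
        total += c * c
    return total / trials

low, mid, high = (signal_power(s) for s in (0.05, 0.5, 5.0))
# performance peaks at intermediate noise: mid exceeds both low and high
```

The non-monotonic curve traced by these three points is the signature Levin measured rigorously with information-theoretic tools.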
©1995 - 2024, AMERICAN PHYSICAL SOCIETY
APS encourages the redistribution of the materials included in this newspaper provided that attribution to the source is noted and the materials are not truncated or changed.