Hold for NATURE embargo: Thursday, July 31, 1997

Media Contact: Warren R. Froelich, (619) 534-8564, [email protected]

STRATEGY USED BY ARTIFICIAL NEURAL NETS DISCOVERED IN MEMORY SYSTEMS OF BRAIN TISSUE

In a surprising twist, a team of neurobiologists at the University of California, San Diego has discovered that a powerful strategy used by artificial neural networks for learning and memory has a counterpart in a living brain.

The work--published in the current issue of the British journal Nature--not only provides a biological basis for this self-teaching strategy, called the back-prop algorithm, but also counters critics who have claimed that the algorithm, though clever, is not brain-like.

"We've demonstrated that at least part of this very popular, very powerful, computer algorithm used in artificial neural networks is a biological plausible mechanism used by the brain," said Mu-ming Poo, the Stephen W. Kuffler Professor of Biology at UCSD, who led a team that included postdoctoral fellow Reiko Maki Fitzsimonds and graduate student Hong-jun Song.

In artificial neural networks, information is processed simultaneously along multiple paths through layers of nerve-cell-like (neuron-like) units. Early neural nets consisted of two layers of units, with a single set of connections between them. Newer nets have three layers: an input layer, an output layer, and a middle, or "hidden," layer that is analogous to the vast and largely uncharted domains in the brain where our representations of the world are formed and processed.
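As a rough sketch of that layered arrangement (the layer sizes, random weights, and sigmoid activation below are illustrative assumptions, not details from the study), a signal entering the input layer is transformed by the hidden layer before reaching the output layer:

    import numpy as np

    # Illustrative three-layer network: input -> hidden -> output.
    # Sizes, weights, and the sigmoid activation are arbitrary choices.
    rng = np.random.default_rng(0)
    n_input, n_hidden, n_output = 4, 3, 2
    W_hidden = rng.normal(size=(n_hidden, n_input))   # input-to-hidden connections
    W_output = rng.normal(size=(n_output, n_hidden))  # hidden-to-output connections

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(x):
        # The hidden layer forms an internal representation of the input,
        # loosely analogous to the "hidden" domains described above.
        hidden = sigmoid(W_hidden @ x)
        output = sigmoid(W_output @ hidden)
        return output

    print(forward(np.array([1.0, 0.0, 0.5, -0.5])))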

Today's networks are not designed to be programmed in the traditional sense, but to be trained, learning by trial and error. To accomplish this, most neural nets rely on back-prop, short for back-propagation of errors.

Back-prop refers to how the network corrects mistakes. In essence, the circuit learns by matching an output signal against a desired response supplied by a device in the model called a "teacher." When the output units do not yield the desired responses, the teacher re-adjusts the strength of the connections between the layers according to the magnitude of the output errors. After repeated trials, the network finally gets it right, and the teacher leaves it alone. The result is a network that has learned and can perform the desired task.
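In code, that teacher-driven correction is usually written as a training loop. The sketch below is a generic textbook version of back-prop on an invented toy task (the XOR problem), not the algorithm as configured in any network from the article; the layer sizes, learning rate, and training data are assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_input, n_hidden, n_output = 2, 3, 1
    W1 = rng.normal(size=(n_hidden, n_input))   # input-to-hidden connections
    W2 = rng.normal(size=(n_output, n_hidden))  # hidden-to-output connections
    lr = 0.5                                    # learning rate

    # Toy task: XOR, a classic problem that requires the hidden layer.
    inputs  = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    targets = np.array([[0], [1], [1], [0]], dtype=float)  # the "teacher's" answers

    for trial in range(10000):            # repeated trials until the net gets it right
        for x, t in zip(inputs, targets):
            # Forward pass through the layers
            h = sigmoid(W1 @ x)           # hidden-layer activity
            y = sigmoid(W2 @ h)           # output-layer activity

            # Error at the output, scaled by the sigmoid's slope
            delta_out = (t - y) * y * (1 - y)
            # Propagate the error backward to the hidden layer
            delta_hid = (W2.T @ delta_out) * h * (1 - h)

            # Adjust connection strengths according to the size of the errors
            W2 += lr * np.outer(delta_out, h)
            W1 += lr * np.outer(delta_hid, x)

    # After many trials the outputs approach the teacher's targets (0, 1, 1, 0).
    for x in inputs:
        print(sigmoid(W2 @ sigmoid(W1 @ x)))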

Though the approach intrigued members of the neural network community, many biologists felt it added nothing to the understanding of how the brain works since, they said, electrochemical signals in nerve cells can't run backward from output to input.

The new UCSD research contradicts this notion by showing that signals carrying information about the strength of the connections between nerve cells can back-propagate from the output of a neuron to its input.

"I think this work represents an important bridge linking the experimental neurobiologist with the neural network community," said Poo.

Nerve cells communicate with each other via electrical signals initiated by the ebb and flow of sodium and potassium ions through channels in nerve membranes. This activity, likened to a battery, sparks an excitatory wave of electricity called an action potential. This wave rumbles along a cablelike axon like the lit portion of a fuse until it settles at the tip of this long branch. The action potential then triggers the release of a neurotransmitter which crosses a narrow cleft called the synapse to a neighboring nerve cell. The message--a chemical delivered on the back of an electrical impulse--is thereby delivered.

The efficiency with which nerve signals can jump across the synapses can be modified with use. "Long-term potentiation" (LTP) results when repeated stimulation strengthens the synaptic connection. Different patterns of stimulation can lead to the opposite effect, called "long-term depression" (LTD), where the connection is weakened. The strengthening and weakening of these connections are believed to be directly related to learning and memory.
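As a loose analogy only (the real biological rules are far richer), network models often capture this use-dependent strengthening and weakening by nudging a numerical "weight" up or down; the toy update rule and step size below are invented for illustration:

    weight = 0.5          # strength of one model synaptic connection

    def update(weight, pre_active, post_active, step=0.05):
        # Strengthen when the two cells are active together (LTP-like);
        # weaken when the sending cell fires without effect (LTD-like).
        if pre_active and post_active:
            return min(weight + step, 1.0)
        elif pre_active and not post_active:
            return max(weight - step, 0.0)
        return weight

    for _ in range(10):   # repeated "stimulation" strengthens the connection
        weight = update(weight, pre_active=True, post_active=True)
    print(weight)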

In their study, the UCSD researchers followed the path of an electrical stimulus in a simple network of three neurons, removed from the hippocampus, a region of the brain likened to a switchboard that processes memories.

Following excitation by an electrical stimulus, signals were recorded by tiny electrodes attached to each of the neurons as they communicated with one another. The study showed that when long-term depression was induced at one output synapse (at the tip of an axon), the effect propagated back to the input synapse (on the dendrite).

"We purposely called this phenomenon back propagation, since we believe this could provide at least part of the mechanism for the implementation of the neural net 'back-prop' algorithm in the real brain," said Poo.

The researchers found that the signal not only could propagate backward, it could also spread laterally to other synapses made both onto and by that neuron; there was, however, no forward propagation.

"This means that whenever you produce long-term changes, or memory imprints, by activating a set of synapses in the neural network, that imprint not only is made at activated synapses, it also spreads selectively to other synapses within the network," said Poo. "Activity in each neuron can actually produce long-term impacts on many other neurons in the neural network."

# # #
