SPECIAL THEME: BIOCOMPUTING
ERCIM News No.43 - October 2000

Neurobiology Keeps Inspiring New Neural Network Models

by Lubica Benuskova


Biologically inspired recurrent neural networks are investigated at the Slovak Technical University in Bratislava. This project, supported by the Slovak Scientific Grant Agency VEGA, builds on the results of the recently accomplished Slovak-US project ‘Theory of neocortical plasticity’ in which theory and computer simulations were combined with neurobiological experiments in order to gain deeper insights into how real brain neurons learn.

The human brain contains roughly a hundred thousand million (10^11) specialised cells called neurons. Human and animal neurons share common properties, so researchers often use animal brains to study specific questions about how neural networks process information. The aim is to extrapolate these findings to ideas about how our own brains work. Such understanding can be crucial not only for medicine but also for computer science. Of course, the validity of extrapolating from animal studies depends on many aspects of the problem studied. We studied a certain category of plastic changes occurring in neurons when an animal is exposed to a novel sensory experience. We work with the evolutionarily youngest part of the brain, ie the neocortex, which is involved mainly in the so-called cognitive functions (eg perception, association, generalisation, learning and memory).

[Figure: Cortical neuron. Courtesy of Teng Wu, Vanderbilt University, USA.]

Neurons emit and process electric signals, which they communicate through specialised connections called synapses. Each neuron can receive information from, and send it to, thousands of synapses. Each piece of information is transmitted via a specific and well-defined set of synapses, with its own anatomical origin and target and its own signal-transmission properties. At present it is widely accepted that the origins and targets of synaptic connections, as well as most properties of signal transmission, are determined genetically. However, the efficacy of signal transfer at synapses can change throughout life as a consequence of learning.

In other words, whenever we learn something, the signal transfer functions of many synapses somewhere in our neocortex change. These synaptic changes are then reflected as an increase or decrease in a neuron’s response to a given stimulus. All present theories of synaptic learning refer to the general rule introduced by the Canadian psychologist Donald Hebb in 1949: repeated activation of one neuron by another, across a particular synapse, increases its strength. We can record changes in neurons’ responses and then infer which synapses have changed their strengths and in which direction, up or down. For this inference we need a reasonable theory, that is, a set of assumptions and rules that can be assembled into a model which simulates the given neural network and, in simulation, reproduces the evolution of its activity. If the model works, it can give us deeper insight into what is going on in the real neural network.
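Hebb’s rule is easy to state computationally. The following sketch is purely illustrative (the variable names and the toy driving input are our assumptions, not part of the project): each weight grows in proportion to the product of pre- and postsynaptic activity.

    import numpy as np

    def hebbian_update(w, x, y, eta=0.1):
        # Plain Hebbian learning: a synapse is strengthened in proportion
        # to the product of presynaptic activity x and postsynaptic activity y.
        return w + eta * y * x

    # Toy run: synapses 0 and 2 are repeatedly co-active with the neuron.
    w = np.zeros(3)
    x = np.array([1.0, 0.0, 1.0])   # presynaptic activity pattern
    for _ in range(10):
        y = w @ x + 1.0             # postsynaptic activity; the +1.0 stands
                                    # in for other inputs driving the neuron
        w = hebbian_update(w, x, y)
    print(w)                        # weights 0 and 2 have grown; weight 1 has not

Note that this plain form can only strengthen synapses; the theories discussed below refine it so that weights can also decrease.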

Objectives and Research Description

Experience-dependent neocortical plasticity refers to the modification of synaptic strengths produced by the use and disuse of neocortical synapses. We would like to contribute to the intense scientific effort to reveal the detailed rules that govern the modification of synaptic connections during learning. Further, we want to investigate self-organising neural networks with time-delayed recurrent connections for processing time series of inputs.

Research Results

Experience-dependent neocortical plasticity was evoked in freely moving adult rats in the neocortical representation of their main tactile sense, ie the whiskers. We developed a neural network model of the corresponding neuroanatomical circuitry and, based on computer simulations, proposed which synapses are modified, how they are modified, and why. For the simulation of learning we used the theory of Bienenstock, Cooper and Munro (BCM). The BCM theory was originally introduced for the developing (immature) visual neocortex, modelling experiments on monkeys and cats. We have shown that the BCM rules also apply to the mature stage of brain development, to a different part of the neocortex, and to a different animal species (the rat). The main feature distinguishing the BCM theory from other Hebbian theories of synaptic plasticity is that it postulates a shifting synaptic potentiation threshold whose value determines the sign of synaptic changes. This threshold is proportional to the average of a neuron’s activity over some recent past.

Prof. Ebner’s team at Vanderbilt University in Nashville, TN, USA used an animal model of mental retardation (produced by exposing the prenatal rat brain to ethanol) to demonstrate a specific impairment of experience-evoked neocortical plasticity. From our model we derived an explanation of this impaired plasticity in terms of an unattainably high potentiation threshold. Based on a comparison between computational results and experimental data revealing a specific biochemical deficit in these cortices, we proposed that the value of the potentiation threshold also depends on a specific biochemical state of the neuron.
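In a common textbook formulation of the BCM rule (our sketch, not the project’s equations, which may differ in detail; here the threshold tracks a running average of squared activity), the shifting threshold looks like this:

    import numpy as np

    def bcm_update(w, x, y, theta, eta=0.01, tau=100.0):
        # BCM-style update: above the sliding threshold theta the change is
        # Hebbian (y > theta strengthens active synapses); below it the
        # change is anti-Hebbian (y < theta weakens them).
        w = w + eta * x * y * (y - theta)
        # theta slides: it tracks a running average of recent (squared)
        # activity, so sustained high activity raises the bar for potentiation.
        theta = theta + (y ** 2 - theta) / tau
        return w, theta

In these terms, the impaired plasticity described above corresponds to a threshold theta stuck so high that y - theta remains negative, so sensory experience can only depress, never potentiate, the affected synapses.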

The properties of the self-organising BCM learning rule have inspired us to investigate the state-space organisation of recurrent BCM networks which process time series of inputs. In recurrent neural networks (RNNs) trained by error back-propagation, the activation pattern across the recurrent units encodes the history of symbols presented so far. To perform next-symbol prediction, RNNs tend to organise their state space so that ‘close’ recurrent activation vectors correspond to histories of symbols yielding similar next-symbol distributions. This suggests simple finite-context predictive models built on top of the recurrent activations, grouping close activation patterns via vector quantisation. We have used the recurrent version of the BCM network with lateral inhibition to map histories of symbols into activation patterns of the recurrent layer, and compared the finite-context models built on top of the BCM recurrent activations with those constructed on top of RNN recurrent activation vectors. As a test bed, we used complex symbolic sequences with rather deep memory structure. Surprisingly, the BCM-based model has comparable or better performance than its RNN-based counterpart.
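The finite-context construction can be sketched as follows. This is our illustration of the general scheme rather than the project’s code; the codebook of prototype vectors (obtained, for instance, by vector quantisation of the training activations) is assumed to be given.

    import numpy as np
    from collections import Counter, defaultdict

    def build_finite_context_model(activations, next_symbols, codebook):
        # Quantise each recurrent activation vector to its nearest codebook
        # vector, then collect next-symbol counts per quantisation cell.
        counts = defaultdict(Counter)
        for a, s in zip(activations, next_symbols):
            cell = int(np.argmin(np.linalg.norm(codebook - a, axis=1)))
            counts[cell][s] += 1
        return counts

    def next_symbol_distribution(counts, codebook, a):
        # Predicted next-symbol distribution for a new activation vector a.
        cell = int(np.argmin(np.linalg.norm(codebook - a, axis=1)))
        total = sum(counts[cell].values())
        if total == 0:
            return {}   # unseen cell; in practice the counts would be smoothed
        return {s: c / total for s, c in counts[cell].items()}

Because close activation vectors fall into the same cell, such a predictor exploits exactly the state-space organisation described above: histories with similar next-symbol statistics come to share a context.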

Please contact:
Lubica Benuskova - Slovak Technical University
Tel: +421 7 602 91 696
E-mail: benus@elf.stuba.sk
http://www.dcs.elf.stuba.sk/~benus