ERCIM News No.43 - October 2000

Representing Structured Symbolic Data with Self-organizing Maps

by Igor Farkas

Artificial self-organizing neural networks - especially self-organizing maps (SOMs) - have been studied for several years at the Institute of Measurement Science of the Slovak Academy of Sciences in Bratislava. The SOM research oriented towards structured symbolic data is the most recent line of work and is supported by the Slovak Grant Agency for Science (VEGA) within a project of the Neural Network Laboratory. The goal of this research is to assess the potential of SOMs to represent structured symbolic data, which is associated with various high-level cognitive tasks.

Various information-processing tasks can be handled successfully by neural network models. Neural nets have proven useful predominantly in tasks that deal with real-valued data, such as pattern recognition, classification, feature extraction or signal filtering. They have been criticised as unsuitable for tasks involving symbolic and/or structured data, as occur in some cognitive tasks (language processing, reasoning, etc.); such tasks were previously tackled almost exclusively by classical, symbolic artificial-intelligence methods. In response to this criticism, a number of neural architectures and algorithms were designed during the last decade to demonstrate that neural networks also have the potential to process symbolic structured data. The best-known examples of such models include recursive auto-associative memory (RAAM), tensor-product-based algorithms, and models that exploit the synchrony of neurons’ activation. For example, the RAAM, a three-layer feed-forward neural network, can be trained to encode various data structures presented to the network (such as ‘A(AB)’, where each of the two symbols is encoded as a binary vector). These encodings (representations) are gradually formed in the layer of hidden units during training. However, training is often a time-consuming process (the moving-target problem). Despite this drawback, RAAM has become one of the standard neural approaches to representing structures.

For the learning method to be useful, the generated representations of structures must themselves be structured, so that structure-sensitive operations can be performed on them. In other words, the resulting representations must be systematic (compositional), which requires (in some sense) that similar structures yield similar encodings.

We focused on an alternative model: a neural network with the capability to generate structured representations. In contrast to RAAM, the main advantage of our model is fast training, because it rests on a rather different concept, employing unsupervised learning (self-organisation). The core of the model is the self-organising map (SOM), well known as a standard neural network learning algorithm for clustering and visualisation of high-dimensional data patterns. The visualisation capability of the SOM derives from its unique feature – a topographic (non-linear) transformation from the input space to the output units. The latter are arranged in a regular grid, which implies that data patterns originally close to each other are mapped onto nearby units in the grid. We exploited this topographic property analogously in representing sequences.
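To illustrate the kind of map involved, the following is a minimal SOM training loop in NumPy; the grid size, learning-rate schedule and neighbourhood schedule are illustrative choices, not the parameters used in this project:

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Train a SOM on `data` (n_samples x n_features); return the
    grid of weight vectors, shape (grid[0], grid[1], n_features)."""
    rng = np.random.default_rng(0)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # grid co-ordinates of every unit, for neighbourhood distances
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing='ij'), axis=-1)
    n_steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # learning rate and neighbourhood width decay over time
            frac = t / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 0.5
            # winner = unit whose weight vector is closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            win = np.unravel_index(d.argmin(), d.shape)
            # Gaussian neighbourhood around the winner on the grid
            g = np.exp(-np.sum((coords - np.array(win)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            # move the winner and its neighbours towards the input
            weights += lr * g[..., None] * (x - weights)
            t += 1
    return weights
```

After training, nearby units in the grid hold similar weight vectors, which is the topographic property exploited in the sequel.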

In our approach, each symbolic structure is first decomposed into a hierarchy of sequences (by some external parsing module), which are then mapped onto a hierarchy of SOMs. For instance, the structure ‘A(AB)’, which can be viewed as a tree of depth two, is decomposed into two sequences, ‘Ao’ and ‘AB’; the latter belongs to the first level of the hierarchy and the former to the second, higher level. In this notation, the symbol ‘o’ is simply treated as another symbol and stands for a subtree. In general, structures of maximum depth ‘n’ require ‘n’ SOMs stacked in a hierarchy. Hence, the representation of a structure consists of the simultaneous representations of the sequences that emerge after decomposition, distributed across the SOMs at all levels.
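The decomposition step might be sketched as follows; this parser is a hypothetical stand-in for the external parsing module (the function names are my own), handling bracketed strings such as ‘A(AB)’:

```python
def decompose(expr):
    """Split a bracketed structure into one symbol sequence per tree
    level; returns the sequences deepest level first, root last."""
    levels = {}  # depth -> list of sequences found at that depth

    def parse(s, i, depth):
        seq = []
        while i < len(s):
            c = s[i]
            if c == '(':
                i = parse(s, i + 1, depth + 1)
                seq.append('o')      # placeholder for the subtree
            elif c == ')':
                break
            else:
                seq.append(c)        # ordinary symbol
            i += 1
        levels.setdefault(depth, []).append(''.join(seq))
        return i

    parse(expr, 0, 0)
    max_d = max(levels)
    return [seq for d in range(max_d, -1, -1) for seq in levels[d]]

print(decompose('A(AB)'))   # -> ['AB', 'Ao']
```

Each returned sequence would then be fed to the SOM at the corresponding level of the hierarchy.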

How is a sequence represented in the SOM? The idea comes from Barnsley’s iterated function systems (IFS), which can be used for encoding (and decoding) symbolic sequences as points in a unit hypercube. Every symbol is associated with a particular vertex of the hypercube (whose dimension must be large enough to provide a vertex for each symbol in the alphabet). During encoding (going through the sequence symbol by symbol), a simple linear transformation is applied recursively, resulting in ‘jumps’ within the hypercube; the final point of this series (reached at the end of the sequence) becomes the IFS representation of the sequence. As a result, we get as many points as there are sequences to be encoded. The important feature of these IFS-based sequence representations is a temporal analogue of the topographic property of a SOM: the more similar two sequences are in terms of their suffixes, the closer their representations lie in the hypercube.

The encoding scheme described has been incorporated into the training of the SOM. The number of units has to be selected according to the length of the sequences used: the longer the sequences, the finer the resolution needed and the more units are required. Training results in a topographically ordered SOM, in which a single sequence can afterwards be represented by the winning unit in the map. If, instead of taking the winner’s co-ordinates as output (as is commonly done), we take the global activity of all units in the map (a Gaussian-like profile with its peak centred at the winner’s position in the grid), we can represent multiple sequences simultaneously in one map.
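The IFS encoding of sequences can be sketched in a few lines of NumPy; the two-symbol alphabet, the corner assignment and the contraction factor a=0.5 are illustrative assumptions, not the parameters of the work described here:

```python
import numpy as np

def ifs_encode(sequence, vertices, a=0.5):
    """Map a symbol sequence to a point in the unit hypercube.

    vertices: dict mapping each symbol to a corner of the hypercube.
    Each step contracts the current point towards the corner of the
    next symbol, so the final point is dominated by the most recent
    symbols -- sequences sharing a suffix end up close together.
    """
    dim = len(next(iter(vertices.values())))
    x = np.full(dim, 0.5)                     # start at the centre
    for s in sequence:
        x = a * x + (1 - a) * np.asarray(vertices[s], dtype=float)
    return x

# two symbols mapped to opposite corners of the unit square
verts = {'A': (0.0, 0.0), 'B': (1.0, 1.0)}
pAB = ifs_encode('AB', verts)
pBB = ifs_encode('BB', verts)
pAA = ifs_encode('AA', verts)
# 'AB' and 'BB' share the suffix 'B', so their codes lie closer
# together than those of 'AB' and 'AA', which share only a prefix
assert np.linalg.norm(pAB - pBB) < np.linalg.norm(pAB - pAA)
```

In the scheme described above, points produced this way serve as the inputs on which the SOM is trained, so the map inherits this suffix-based similarity structure.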

We have applied the above scheme to simple two- and four-symbol structures of depth less than four, and the preliminary results (evaluated using a hierarchical cluster diagram of the representations obtained) show that the generated representations are systematically ordered. However, to validate the approach, we shall need to test how it scales to larger symbol sets as well as to more complex structures (in terms of both ‘height’ and ‘width’).

Please contact:
Igor Farkas - Institute of Measurement Science, Slovak Academy of Sciences
Tel: +421 7 5477 5938
E-mail: farkas@neuro.savba.sk