Emergence of Representations through Interactions of a Robot with the Real World
by Nicolas P. Rougier and Frédéric Alexandre
Research by the Cortex team at INRIA Lorraine focuses on the design of biologically inspired models with robust numerical mechanisms that can promote the emergence of multimodal representations in real-world robotics.
We have been studying the various kinds of information representation that may be useful to an autonomous robot seen as a cognitive system. These studies integrate data from the neurosciences into computational models, and in this framework, we have demonstrated how procedural, declarative and working memories can emerge from specific neuronal structures, and can become critical at different levels of information processing.
For instance, in a simple obstacle avoidance system, a neural network model may only learn the required coordination between sensors and actuators. This is called procedural memory, and is one of the roles of the associative cortex. In a more complex task requiring an autonomous system to identify certain locations in an environment (eg because of obstacles), the model needs to memorise descriptions (even very basic ones) to be able to reach these locations. This is called declarative memory, and is one of the roles of the hippocampus. In an even more complex task, a robot may need to perform a sequence of tasks in which order is critical, and therefore needs a system of representation that allows it to recall which tasks have already been performed, and which remain to be done. This is called working memory, and is one of the roles of the prefrontal cortex.
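As a toy illustration of the procedural case, the sensor-to-actuator mapping of the obstacle-avoidance example can be learned by a simple delta rule. Everything below (the two-sensor/two-wheel setup, the hand-written avoidance teacher, the learning rate) is an invented sketch for illustration, not the team's actual model:

```python
import numpy as np

# Toy sketch of "procedural memory": a single-layer network learns
# the mapping from two proximity sensors to two wheel speeds.
# The teacher implements a Braitenberg-style avoidance reflex.
rng = np.random.default_rng(0)
W = np.zeros((2, 3))  # 2 wheels x (2 sensors + bias)

def teacher(left, right):
    # An obstacle sensed on one side slows the opposite wheel,
    # turning the robot away from it.
    return np.array([1.0 - right, 1.0 - left])

lr = 0.1
for _ in range(5000):
    left, right = rng.uniform(0.0, 1.0, 2)
    s = np.array([left, right, 1.0])       # sensors + bias input
    error = teacher(left, right) - W @ s
    W += lr * np.outer(error, s)           # delta rule update

# After training, the network reproduces the reflex on unseen input:
wheels = W @ np.array([0.8, 0.1, 1.0])     # obstacle mostly on the left
```

Here the learned behaviour is purely reactive, which is exactly what distinguishes procedural memory from the declarative and working memories described above: nothing about the environment is stored beyond the weights of the mapping itself.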
The Koala robot and its pan/tilt camera are used to test the robustness of the models in real-world conditions. © INRIA / Photo A. Eidelman.
We have been exploring these three kinds of representations using computational models and have demonstrated the basic principles for each of them. Nonetheless, our studies of these systems at the global level (the level of the whole model) have, for several reasons, also underlined the need to rethink the computational mechanisms involved at the unit level. On the one hand, neuronal models with such specific memorisation abilities need to collaborate and exchange information during real robotic tasks requiring the coordination of these properties. On the other hand, using these models with real robots in the real world requires the design of very robust numerical models.
As a consequence, we are currently working on the design of robust numerical mechanisms that can promote the emergence of these different kinds of information representation. More precisely, we have been studying a model of the cerebral cortex (Burnod, 1989) that emphasises the role of units described as assemblies of neurons, which process information at a higher level than the classical neural unit. We have demonstrated in the past that networks of such units are greatly advantageous for information processing, since they allow the generalisation of knowledge from a small set of experiences. Furthermore, we are now focusing on numerical mechanisms à la CNFT (Continuum Neural Field Theory). These rely on restricted connectivity between units, which is critical in enabling maps to interact with each other (eg visuo-motor maps, visuo-auditory maps, polymodal maps).
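A minimal numerical sketch of such CNFT-style dynamics is a one-dimensional field with local excitation and broader inhibition, which settles into a localized bump of activity around a stimulus. The grid size, time constants and kernel widths below are illustrative assumptions, not the parameters of the team's models:

```python
import numpy as np

# Sketch of Amari-style dynamic neural field dynamics (CNFT):
# tau * du/dt = -u + integral of w(x - x') f(u(x')) dx' + input
n = 100                                 # units on the field
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
tau, dt = 1.0, 0.1

# Difference-of-Gaussians lateral kernel: local excitation,
# broader inhibition (the "restricted connectivity" idea).
d = x[:, None] - x[None, :]
w = 1.5 * np.exp(-d**2 / (2 * 0.1**2)) - 0.75 * np.exp(-d**2 / (2 * 1.0**2))

u = np.zeros(n)                         # membrane potentials
stim = np.exp(-((x - 0.3)**2) / (2 * 0.1**2))  # localized input bump

for _ in range(200):
    f = np.maximum(u, 0.0)              # rectification as firing rate
    lateral = w @ f * dx                # discretized lateral integral
    u += dt / tau * (-u + lateral + stim)

# The field settles into a bump of activity centred on the stimulus.
peak = x[np.argmax(u)]
```

With two competing stimuli, the same inhibition makes the bumps compete, which is what allows interacting maps to select and maintain a coherent multimodal interpretation.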
In the framework of a French CNRS Robea project, we are first developing biologically inspired and robust visual and motor representations together with multimodal integration, which can endow a robot with the ability to predict the consequences of its actions on the real world (an arm with nine degrees of freedom).
In the framework of the FET European project MirrorBot, we are interested in more abstract representational learning. We have defined some complex behavioural protocols for our robot (object localisation, recognition and reaching, action planning and imitation, speech recognition and production) and we are observing how these memorisation mechanisms interact with each other to allow multimodal learning. We are particularly interested in a better understanding of the role of mirror neurons, originally observed by G. Rizzolatti and his laboratory, a member of this project. These neurons in a monkey's premotor cortex have been found to be activated both when the monkey performs an action itself and when it observes the same action performed by another monkey or even by a human. This cortical region corresponds to a region devoted to language in the human brain, and is thought to be the basis for complex abstract representations of action.
Through this ongoing research into multimodal integration and autonomous robot navigation, we aim to show that useful representations for world understanding and language acquisition can emerge from interactions with the world, and that building such representations can be performed by robust numerical mechanisms inspired by neuroscience.
Nicolas P. Rougier, LORIA-INRIA
Tel: +33 3 83 59 30 92