by Pavel Andris, José P. Costeira, Karol Dobrovodsky, Michal Haindl, Josef Kittler, Peter Kurdel, José Santos-Victor and Andrew J. Stoddart
VIRTUOUS (Autonomous Acquisition of Virtual Reality Models from Real World Scenes) is a joint research project between the University of Surrey, Guildford, United Kingdom; Instituto Superior Tecnico, Lisboa, Portugal; the Institute of Information Theory and Automation, Prague, Czech Republic; and the Institute of Control Theory and Robotics, Bratislava, Slovakia, financed by the Commission of the European Communities within the framework of the INCO-COPERNICUS scheme.
The objective of this 3-year project, now in its first year, is to capture virtual reality models of real-world robot cell scenes automatically, without interaction with a human observer, and then to validate these models in a Virtual Reality Robot Arm Trainer application. To obtain a lifelike simulation of the manufacturing process, 3D graphic information about all objects located in the robot workcell must be entered into the trainer. This work becomes tedious and error-prone as scene complexity increases, and automation may substantially reduce the effort it requires.
The goal of the project is to substantially reduce the amount of this work by using an autonomous acquisition system. Acquired data will be used to build a lifelike virtual reality model of the robot workcell and all related objects. The model will be processed by a scene properties extractor and used by the trainer.
Range and vision sensors are used to capture virtual reality models of a robot cell, its manipulation objects and its environment. Within this project we examine two sensor systems. The first uses multiple-view range images from a structured-light range sensor together with a colour camera affixed to a robot arm, while the second, a mobile platform, produces colour video sequences. The video sensor is the more ambitious configuration: it has lower cost but needs more powerful software.
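The multiple-view range images from the arm-mounted sensor must be brought into a common coordinate frame before they can be merged. The article does not detail the project's registration method; the following is a minimal sketch of rigid point-set alignment (the Kabsch/orthogonal-Procrustes solution), assuming point correspondences between views are already known:

```python
import numpy as np

def rigid_align(src, dst):
    """Find rotation R and translation t minimizing ||R @ p + t - q|| over
    corresponding 3D point pairs (src, dst) -- the Kabsch/Procrustes solution.
    Correspondences are assumed known; real systems estimate them (e.g. ICP)."""
    src_c = src - src.mean(axis=0)            # centre both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Applying the returned `R` and `t` to one range scan expresses it in the coordinate frame of the other, after which overlapping scans can be fused into a single surface model.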
Different data sources have to be mutually registered and segmented into meaningful scene objects. To make virtual worlds realistic, detailed scene models must be built. Satisfactory models require not only complex 3D shapes consistent with the captured scene, but also lifelike colour and texture. Textures provide useful cues to a subject navigating in such a VR environment, and they also aid in the accurate detailed reconstruction of the environment. Virtual textures are synthesized from underlying random-field models identified in the segmentation analysis step and are subsequently mapped to the corresponding virtual surfaces. Synthetic colour textures reduce the space and time overheads of textures in VR systems and are thus essential in distributed VR applications.
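As a rough illustration of texture synthesis from a random-field model, the sketch below fits a simple causal autoregressive (CAR) model to a greyscale patch and then generates new texture of any size from the handful of fitted parameters. The neighbourhood shape, the least-squares fit and all function names are illustrative assumptions, not the project's actual (more elaborate, colour) models:

```python
import numpy as np

# Causal neighbourhood offsets (left, up-left, up, up-right) -- a common
# minimal choice for raster-scan models; an assumption for this sketch.
OFFSETS = [(0, -1), (-1, -1), (-1, 0), (-1, 1)]

def fit_car(texture):
    """Least-squares fit of pixel = sum_k a_k * neighbour_k + noise."""
    rows, cols = texture.shape
    X, y = [], []
    for r in range(1, rows):
        for c in range(1, cols - 1):
            X.append([texture[r + dr, c + dc] for dr, dc in OFFSETS])
            y.append(texture[r, c])
    X, y = np.asarray(X), np.asarray(y)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = np.std(y - X @ coeffs)            # driving-noise estimate
    return coeffs, sigma

def synthesize(coeffs, sigma, shape, seed=0):
    """Generate texture by scanning in raster order, predicting each pixel
    from its causal neighbours plus Gaussian driving noise."""
    rng = np.random.default_rng(seed)
    out = rng.standard_normal(shape) * sigma  # seeds the border pixels
    for r in range(1, shape[0]):
        for c in range(1, shape[1] - 1):
            nbrs = [out[r + dr, c + dc] for dr, dc in OFFSETS]
            out[r, c] = np.dot(coeffs, nbrs) + rng.normal(0.0, sigma)
    return out
```

The point of the approach is compression: only the model parameters (here four coefficients and a noise variance) need to be stored or transmitted, rather than the texture image itself, which is what makes synthetic textures attractive for distributed VR.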
The trainer software consists of visualization and robot-arm control software. To validate the performance of the trainer, the generated trajectories can be downloaded to a PUMA 560 industrial robot. The trainer hardware includes a PC running previously developed real-time robot-control software. The PC is connected to a Silicon Graphics workstation that serves as a scene viewer. The captured models are used to develop and test user programs for the cell without using the cell itself. Finally, the user programs are downloaded into the cell and verified.
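A trainer of this kind typically samples joint-space trajectories at the controller's update rate before downloading them to the robot. The sketch below is a hypothetical illustration, not the project's control software; the function name, the 6-DOF assumption and the default control period are all assumptions made for the example:

```python
import numpy as np

def joint_trajectory(q_start, q_goal, duration, dt=0.028):
    """Sample a smooth joint-space trajectory between two 6-DOF
    configurations using cubic (zero end-velocity) time scaling.
    The default dt is an assumed controller period, chosen only
    to make the example concrete."""
    q_start, q_goal = np.asarray(q_start, float), np.asarray(q_goal, float)
    n = round(duration / dt) + 1              # number of trajectory samples
    ts = np.linspace(0.0, 1.0, n)             # normalized time 0..1
    s = 3 * ts**2 - 2 * ts**3                 # cubic ease-in/ease-out profile
    return q_start + s[:, None] * (q_goal - q_start)
```

Each row of the returned array is one set of six joint angles; streaming the rows to the controller at interval `dt` moves the arm from `q_start` to `q_goal` with zero velocity at both ends.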
Producing detailed models is of generic interest to a number of different fields. For this reason the project results will take the form of plug-in modules for a commercial graphical system, so that they can be used in a variety of other virtual reality applications in entertainment, medicine and manufacturing. Further information can be found on the project website at http://www.ee.surrey.ac.uk/Research/VSSP/virtuous/virtuous.html.
Michal Haindl - CRCIM
Tel: +420 2 66052350