ERCIM News No.31 - October 1997

Compositing Computer and Video Image Sequences

by Marie-Odile Berger and Jean-Claude Paul

Augmented reality systems aim at enhancing the user's vision with computer-generated imagery. For numerous applications, it is necessary to merge a video signal from a real camera with computer-generated objects. Such techniques are of great interest for assessing the potential impact of new constructions on local environments (for urban planning decisions, for instance). To make such systems and their applications effective, the computer-generated objects (also called virtual objects) must be blended convincingly with the real images. In this context, the ISA project at INRIA Lorraine develops methods and tools devoted to the automatic composition of video and computer images.

One major challenge in image composition is to correctly register the real data and the virtual data in order to ensure the geometric coherence of the composed scene along a complete video sequence. This can be done using two basic approaches. The first is to use position sensors (for example, Polhemus sensors); but instrumenting the real world is not always possible, especially for vast or outdoor environments. The other is to perform registration using objects in the scene whose models are known. As we often work on outdoor architectural applications, instrumentation is impossible. Hence we are working on temporal model-based registration, since the 3D structure of some objects (buildings) in the scene is often known.

Figure 1: The model of the bridge.

Figure 2: The reprojection of the model of the bridge using the computed pose.

Ensuring temporal registration is not sufficient for realistic composition. Other significant visual cues to the human perceptual system must be considered: for instance, proper occlusion resolution between real and virtual objects is highly desirable in composition systems. Other photometric interactions between real and virtual objects (continuity of lighting, shadowing) should also be considered.

Unlike numerous existing composition systems, which often require extensive and tedious interaction with the user, the aim of the ISA project is to develop tools and algorithms that allow composition to be performed as automatically as possible.

We have first designed a robust algorithm for temporal model-based registration. The system detects and tracks various features corresponding to the model, which are used to compute the camera position and to update the set of visible features. The heart of our system is the pose computation method, which handles various features (points, lines, curves) in a very robust statistical way. This method is therefore able to give a correct estimate even if tracking errors occur. The reliability of our method has been demonstrated in the Paris bridges illumination project, whose aim was to test new lighting systems for some bridges around the Île de la Cité. In a sequence shot at dusk, we replaced the bridge with its lighting simulation: Figure 1 shows the model of the bridge and Figure 2 exhibits the reprojection of the model using the computed pose. The interested reader may see further results on our web site:
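The article does not detail the statistical estimator used in the pose computation. A common way to make a least-squares pose fit robust to tracking errors is to downweight outlying reprojection residuals with an M-estimator; the sketch below (an illustrative assumption, not the authors' actual method) computes Tukey biweight weights, which fall smoothly to zero for residuals far beyond a robust scale estimate, so a grossly mistracked feature stops influencing the pose.

```python
import numpy as np

def tukey_weights(residuals, c=4.685):
    """Tukey biweight weights for robust estimation (illustrative sketch).

    Residuals are scaled by a robust scale estimate (1.4826 * MAD);
    any residual beyond c * scale receives weight 0, so outliers from
    tracking errors are effectively discarded from the pose fit.
    """
    residuals = np.asarray(residuals, dtype=float)
    scale = 1.4826 * np.median(np.abs(residuals))  # MAD-based scale
    u = residuals / (c * scale)
    # Inliers get weight (1 - u^2)^2 in [0, 1]; outliers get 0.
    return np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)

# Example: four well-tracked features and one gross tracking error.
w = tukey_weights([0.1, -0.2, 0.15, 0.05, 5.0])
# The last residual lies far outside the robust scale and gets weight 0.
```

In an iteratively reweighted least-squares loop, these weights would multiply each feature's contribution to the pose residual at every iteration, which is one standard way to obtain the kind of robustness to tracking errors the article describes.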

Note that our algorithm does not run in real time. This was not a priority for us, given the cost of the rendering process. However, the tracking and the pose computation can be processed in parallel, so real-time applications could be carried out, provided that the computation of the virtual objects is not too expensive.

We are currently working on the problem of occlusions. Theoretically, resolving occlusions could be achieved by inferring a dense depth map from two consecutive images, which allows the depths of the virtual and the real objects to be compared. Unfortunately, despite new advances in 3D reconstruction, the depth map lacks accuracy and cannot be used as is. We are therefore investigating a contour-based approach that does not require reconstruction. For more information on the ISA project, see:
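The theoretical depth-map approach mentioned above amounts to a per-pixel depth test at compositing time. The minimal sketch below (my illustration, not the ISA system) shows why an accurate depth map would suffice: the virtual object is drawn only at pixels where its rendered depth is smaller than the reconstructed depth of the real scene. It also makes the fragility clear, since any noise in `real_depth` flips the comparison near occlusion boundaries.

```python
import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel depth test for occlusion resolution (illustrative sketch).

    The virtual object is visible only where it lies closer to the
    camera than the reconstructed real surface; elsewhere the real
    image shows through, so real objects correctly occlude virtual ones.
    """
    virt_in_front = virt_depth < real_depth          # boolean mask, H x W
    return np.where(virt_in_front[..., None], virt_rgb, real_rgb)

# Example: a 2x2 image where the real surface sits at depth 1.0 and the
# virtual object is closer (0.5) at two pixels, farther (2.0) at the others.
real_rgb = np.zeros((2, 2, 3), dtype=np.uint8)       # black real image
virt_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)   # white virtual object
real_depth = np.ones((2, 2))
virt_depth = np.array([[0.5, 2.0], [2.0, 0.5]])
out = composite(real_rgb, real_depth, virt_rgb, virt_depth)
```

A contour-based approach sidesteps the noisy per-pixel comparison by deciding visibility along tracked occluding contours instead of over a dense depth map.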

Please contact:
Marie-Odile Berger and Jean-Claude Paul - INRIA Lorraine
Tel: +33 3 8359 2070, +33 3 8359 2077
