JOINT ERCIM ACTIONS
ERCIM News No.38 - July 1999

AVOCADO - The Virtual Environment Framework

by Henrik Tramberend, Frank Hasenbrink, Gerhard Eckel
and Uli Lechner


AVOCADO is a software framework designed to allow the rapid development of virtual environment applications for immersive and non-immersive display setups like the CAVE (CAVE Automatic Virtual Environment), CyberStage, Responsive Workbench and Teleport. It supports the programmer in all tasks involved with these types of applications.

AVOCADO integrates a variety of interface devices and is easily extensible to accommodate devices yet to be invented. It is highly interactive and responsive, supports a rapid-prototyping style of application development, and will enable the development of truly distributed applications. It is targeted at high-end Silicon Graphics workstations and aims to deliver the best performance these machines are capable of.

AVOCADO includes the following concepts:

Visual Data Processing

The visual data processing is organized in a pipeline and computed in parallel by Performer. The rendering pipeline consists of an application stage, a culling stage and a drawing stage.

After the modeling hierarchy is updated to its current state in the application process, it is passed on to the culling process, which strips all invisible objects. It is important to support this technique by dividing large geometry into smaller, cullable objects. The part of the scene that remains after culling is passed on to the drawing process, where it is rendered to the screen with OpenGL. For configurations with more than one visual display system, a corresponding number of pipelines is used.
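The application/culling/drawing split described above can be illustrated with a minimal Python sketch (all names and the toy visibility test are hypothetical; in AVOCADO, Performer runs these stages as parallel processes and the drawing is done with OpenGL):

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    centre: tuple   # bounding-sphere centre (x, y, z), world units
    radius: float   # bounding-sphere radius

def cull(objects, visibility_test):
    """Culling stage: keep only objects whose bounding sphere passes the test."""
    return [obj for obj in objects if visibility_test(obj)]

def draw(objects):
    """Stand-in for the drawing stage: report what would be rendered."""
    return [obj.name for obj in objects]

# Toy visibility test: the viewer looks down the negative z axis, so
# anything entirely behind the viewer (z > 0) is stripped.
def in_front_of_viewer(obj):
    return obj.centre[2] + obj.radius < 0

# Dividing large geometry into small objects like these is what makes
# culling effective -- a single huge object could never be stripped.
scene = [
    SceneObject("house", (0.0, 0.0, -10.0), 2.0),
    SceneObject("tree",  (5.0, 0.0, -3.0),  1.0),
    SceneObject("cloud", (0.0, 8.0, 4.0),   1.5),  # behind the viewer
]

visible = cull(scene, in_front_of_viewer)
rendered = draw(visible)  # ["house", "tree"]
```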

Auditory Rendering

Rendering the auditory scene has to take into account the position of the observer's head in the virtual world and in the auditory display, as well as the characteristics of the auditory display (ie the loudspeaker configuration). Auditory rendering is a two-stage process: in the first stage a source signal is synthesized, and in the second stage it is spatialized. The first stage needs only the sound model parameters. In the second stage, the signals driving the auditory display are computed as a function of the distance between observer and sound source, the radiation characteristics of the source, and the signature of the acoustic environment.

With these signals, the auditory display produces the illusion of a sound emitted from a certain position in a certain acoustic environment shared by the observer and the source. The sound rendering is a dynamic process that takes into account movements of the observer in the display, movements in the virtual world, and movements of the sound source. If these movements are faster than about 30 km/h, the pitch changes due to the Doppler shift are simulated as well.
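Two of the spatialization ingredients mentioned above, distance-dependent attenuation and the Doppler shift, can be sketched as follows. This is a minimal illustration of the standard physics, not AVOCADO's actual implementation; the function names and the simple 1/d attenuation law are assumptions:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def distance_gain(distance, reference=1.0):
    """Inverse-distance (1/d) amplitude attenuation, clamped near the source."""
    return reference / max(distance, reference)

def doppler_frequency(source_freq, radial_speed):
    """Perceived frequency for a source approaching the listener.

    radial_speed is the speed (m/s) at which the source moves towards
    the observer; negative values mean it is receding.
    """
    return source_freq * SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_speed)

# A 440 Hz source approaching at 30 km/h (about 8.33 m/s),
# heard from 10 m away:
speed = 30.0 / 3.6                           # km/h -> m/s
shifted = doppler_frequency(440.0, speed)    # roughly 451 Hz
gain = distance_gain(10.0)                   # 0.1
```

At 30 km/h the shift is only about 11 Hz on a 440 Hz tone, which is consistent with treating it as negligible below that speed.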

Tactile Rendering

The CyberStage display has a set of low-frequency emitters built into its floor. This allows vibrations to be generated which can be felt through the feet and legs. There are two main applications for this display component. First, low-frequency sound (which cannot be localized) can be emitted to complement the loudspeaker projection. Second, specially synthesized low-frequency signals can be used to convey attributes such as roughness or surface texture.
The vibration display is handled like sound in the rendering process. Sound models are used to generate the low-frequency signals. Sound synthesis techniques generally referred to as granular synthesis are well suited to producing the band-limited impulses that may represent surface features. Such features can be displayed through user interaction: for instance, a virtual pointing device can be used to slide over a surface and produce vibrations. Additionally, higher-frequency sound can be produced if necessary. Some of what is usually felt through the skin of our fingers when sliding over an object is thus presented to our feet. This sensation can complement sound and vision dramatically.
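The grains used in granular synthesis can be sketched as short windowed sinusoids. The sketch below (parameters and function name are illustrative, not taken from AVOCADO) shows why such grains are click-free when sequenced to suggest surface texture:

```python
import math

def grain(freq_hz, duration_s, sample_rate=48000):
    """One grain: a sinusoid shaped by a Hann (raised-cosine) window."""
    n = int(duration_s * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        # The Hann window rises from 0 to 1 and back to 0 over the grain.
        window = 0.5 * (1.0 - math.cos(2.0 * math.pi * i / (n - 1)))
        samples.append(window * math.sin(2.0 * math.pi * freq_hz * t))
    return samples

# A 20 ms grain at 40 Hz -- within the range of the floor emitters.
g = grain(40.0, 0.020)
# The window forces the grain to start and end at zero, so a stream of
# grains triggered by a pointing device gliding over a surface produces
# band-limited impulses rather than audible (or palpable) clicks.
```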

Please contact:

Henrik Tramberend - GMD
Tel: +49 2241 14 2364
E-mail: henrik.tramberend@gmd.de

Frank Hasenbrink - GMD
Tel: +49 2241 14 2051
E-mail: frank.hasenbrink@gmd.de

Gerhard Eckel - GMD
Tel: +49 2241 14 2968
E-mail: gerhard.eckel@gmd.de

Uli Lechner - GMD
Tel: +49 2241 14 2984
E-mail: ulrich.lechner@gmd.de

