Human Computer Interaction
ERCIM News No.46, July 2001

Human Computer Interaction and Virtual Worlds

by Jutta Becker, Ivo Haulsen and Herbert Rüsseler


The development of scalable Human Computer Interfaces and the definition of adaptive interaction paradigms have led both to the use of standard input devices for 3D interaction and to the integration and configuration of innovative interaction devices. The ViSTA team at GMD-First is currently engaged in several HCI-related projects exploring adaptive interaction paradigms. Standard input devices with haptic feedback and sound are used within VR learning and training environments, and novel multi-touch-sensitive input devices are being developed to improve tomorrow’s human/computer interaction with Virtual Worlds. These projects are funded by the Federal Ministry of Education and Research.

Virtual Worlds gain acceptance through the breadth of interaction possibilities they offer. The development of web standards such as VRML, X3D and Java was an important step towards scalable virtual environments. Exploring virtual worlds within immersive environments such as CAVE systems, however, demands interaction paradigms and device technology that go beyond common desktop standards.

In general, a vocational learning environment is centred around a PC or laptop computer. Keyboard, mouse or touchpad are used, and occasionally additional devices such as joysticks or trackballs are available. Given the variety of learning contexts, interaction methods must be adapted to the individual user and his or her learning requirements. Different learning scenarios provide the user with interactive simulations or predefined animations. Besides navigating through and exploring objects and environments, a variety of manipulation facilities support the training.

Standard input devices and multi-channel interaction paradigms improve HCI with virtual worlds.

In common HCIs for VR systems, navigation and manipulation methods are predefined and cannot be changed without recompilation. Our scalable interaction model instead allows input devices to be configured by reading an XML document that defines the input channels and their transformations, with no recompilation. Mouse movements can be used alternately for translation, rotation or other parameter variations, and additional buttons or wheels can be programmed without developing new functions. Depending on the user’s role and classification (eg expert/beginner, technician/manager), certain interaction and configuration possibilities are either hidden or made available; a sketch of such a configuration follows below.
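By way of illustration, a minimal hypothetical configuration might bind mouse axes and the wheel to scene parameters. The element and attribute names below are assumptions, not the actual ViSTA schema; a few lines of standard Java (javax.xml.parsers) then read the document into a dispatch table:

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.HashMap;
    import java.util.Map;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class InteractionConfig {
        // Hypothetical configuration document; the real ViSTA schema may differ.
        static final String CONFIG =
            "<interaction>"
          + "<channel device='mouse' axis='x' action='rotate-y' scale='0.01'/>"
          + "<channel device='mouse' axis='y' action='rotate-x' scale='0.01'/>"
          + "<channel device='wheel' axis='z' action='translate-z' scale='0.10'/>"
          + "</interaction>";

        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(CONFIG.getBytes(StandardCharsets.UTF_8)));

            // Build a dispatch table: raw device channel -> scene action.
            Map<String, String> bindings = new HashMap<>();
            NodeList channels = doc.getElementsByTagName("channel");
            for (int i = 0; i < channels.getLength(); i++) {
                Element c = (Element) channels.item(i);
                String source = c.getAttribute("device") + ":" + c.getAttribute("axis");
                bindings.put(source, c.getAttribute("action"));
            }

            // Device events are looked up here at runtime, so rebinding a mouse
            // axis from rotation to translation requires only editing the XML.
            System.out.println(bindings.get("mouse:x")); // prints rotate-y
        }
    }

Hiding configuration options from beginners then amounts to filtering which channel elements a given user role may see or edit.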

We integrate additional spatial audio and tactile feedback into the training environments. For instance, the collision of objects during manipulation is signalled through combined audio and force feedback. The iFeel™ tactile feedback technology, integrated into standard mouse devices, is available for any PC or laptop computer.
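The pattern behind this multi-channel feedback is a simple event dispatch: a collision reported by the scene graph triggers a spatialised sound and a haptic pulse whose strength follows the impact. The sketch below is illustrative only; the SpatialAudio and HapticDevice interfaces are hypothetical stand-ins, not the ViSTA or iFeel API.

    // Hypothetical feedback interfaces; the real ViSTA and iFeel APIs differ.
    interface SpatialAudio { void play(String clip, float x, float y, float z); }
    interface HapticDevice { void pulse(int strength, int milliseconds); }

    // Dispatches one multi-channel response per collision event.
    class CollisionFeedback {
        private final SpatialAudio audio;
        private final HapticDevice haptics;

        CollisionFeedback(SpatialAudio audio, HapticDevice haptics) {
            this.audio = audio;
            this.haptics = haptics;
        }

        // Called by the scene graph when a manipulated object hits another.
        void onCollision(float x, float y, float z, float impactSpeed) {
            audio.play("impact.wav", x, y, z);                        // sound at contact point
            int strength = Math.min(255, (int) (impactSpeed * 100));  // clamp pulse strength
            haptics.pulse(strength, 50);                              // short tactile pulse
        }
    }

    public class FeedbackDemo {
        public static void main(String[] args) {
            // Console stubs stand in for real audio and haptic back ends.
            SpatialAudio audio = (clip, x, y, z) ->
                System.out.printf("play %s at (%.1f, %.1f, %.1f)%n", clip, x, y, z);
            HapticDevice haptics = (strength, ms) ->
                System.out.printf("pulse strength=%d for %d ms%n", strength, ms);
            new CollisionFeedback(audio, haptics).onCollision(0.2f, 1.0f, -0.5f, 1.8f);
        }
    }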

In other project areas, together with our partner, the FHG Institute IAP, we are developing pressure-sensitive devices based on polymer materials, advanced electronics and intelligent interaction paradigms. As well as exploring organic materials for future low-cost multisensory devices, we will also be examining emotion and gesture processing.

Links:
http://www.first.gmd.de/vista
http://www.first.gmd.de/vista/hci

Please contact:
Jutta Becker — GMD
Tel: +49 30 6392 1776
E-mail: bonjour@first.gmd.de

Ivo Haulsen — GMD
Tel: +49 30 6392 1777
E-mail: ivo@first.gmd.de