
Augmented assembly using a multimodal interface

Leader: Sanni Siltanen, VTT
Partners:

  • VTT Technical Research Centre of Finland
  • Technical University of Crete

Description of the system:

In this showcase a multimodal user interface for an augmented assembly application has been created and evaluated. The demo application used in this showcase demonstrates how to augment the assembly of, for example, a motor or another complex product on an assembly line. The current portable demonstration uses a simple 3D puzzle box. It serves as a means to study how to implement an augmented assembly system in a real factory setting and to explore the advantages and disadvantages of augmented reality techniques. With this demonstration system we can also test, demonstrate and evaluate different input modalities for an augmented assembly setup.

The demonstration system consists of a PC and a lightweight HMD (Head Mounted Display) with a small monitor in front of one eye and a camera and a microphone attached to it. The assembler sees augmented instructions on the HMD and can perform a complex assembly without browsing the instruction manual.
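The description above implies a basic loop: the camera on the HMD tracks the scene, and the instruction for the current assembly phase is drawn over the live view. A minimal sketch of that loop, assuming OpenCV and ArUco fiducial markers (the actual tracking library used by VTT is not named here, so treat this as illustrative):

    import cv2

    def run_display_loop(instructions):
        """Overlay the current assembly instruction on the HMD camera view."""
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        detector = cv2.aruco.ArucoDetector(dictionary)
        cap = cv2.VideoCapture(0)              # camera attached to the HMD
        step = 0                               # index of the current phase
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            corners, ids, _ = detector.detectMarkers(frame)
            if ids is not None:
                # Anchor the instruction text at the first detected marker.
                x, y = corners[0][0][0].astype(int)
                cv2.putText(frame, instructions[step], (x, y),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
            cv2.imshow("HMD view", frame)      # stands in for the HMD monitor
            if cv2.waitKey(1) == 27:           # Esc ends the demo
                break
        cap.release()
        cv2.destroyAllWindows()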

The user can move forward and backward through the assembly phases using keyboard, speech or gesture commands. The keyboard is probably the least practical interaction method in real assembly work; the more convenient modalities are the speech and gesture user interfaces. We wanted to create as lightweight a system as possible, and these two modalities were implemented because they require no additional equipment apart from a small microphone. The system architecture allows further modalities to be added if required.
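The key design point here is that every modality reduces to the same small set of commands, so the assembly-phase logic never needs to know where a command came from and new modalities can be added without touching it. A minimal sketch of such a modality-independent command layer (all names are illustrative, not taken from the VTT system):

    from enum import Enum, auto

    class Command(Enum):
        NEXT = auto()
        PREVIOUS = auto()

    class AssemblyPhases:
        """Holds the current phase; any modality feeds it Commands."""
        def __init__(self, num_phases):
            self.num_phases = num_phases
            self.current = 0

        def handle(self, command):
            if command is Command.NEXT:
                self.current = min(self.current + 1, self.num_phases - 1)
            elif command is Command.PREVIOUS:
                self.current = max(self.current - 1, 0)
            return self.current

    # Each modality maps its raw input to a Command; a speech front end,
    # for instance, might translate recognized words like this:
    SPEECH_VOCABULARY = {"next": Command.NEXT, "back": Command.PREVIOUS}

    phases = AssemblyPhases(num_phases=12)
    print(phases.handle(SPEECH_VOCABULARY["next"]))   # -> 1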

TUC made a speech recognition module for the system; VTT developed a simple gesture interface and integrated both new modalities into the application.
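The text does not say how the gesture interface works, only that it is simple and needs no extra hardware, which suggests it can reuse the HMD camera. As one hedged illustration, frame differencing can detect coarse hand motion, with the side of the image that moves selecting the previous or next phase:

    import cv2

    def gesture_command(prev_gray, gray, threshold=25, min_pixels=2000):
        """Map coarse hand motion to 'previous' (left) or 'next' (right)."""
        diff = cv2.absdiff(prev_gray, gray)            # pixel-wise change
        _, motion = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
        h, w = motion.shape
        left = cv2.countNonZero(motion[:, : w // 2])
        right = cv2.countNonZero(motion[:, w // 2 :])
        if max(left, right) < min_pixels:
            return None                                # no gesture detected
        return "previous" if left > right else "next"

The returned strings would map onto the same NEXT/PREVIOUS commands shown in the dispatch sketch above, keeping the gesture recognizer just another interchangeable input channel.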

We had the first running demonstration system with the different user interface modalities at the end of February 2007.

In March the system was tested and developed further, and a demo description was submitted to the CIVR07 conference.


A video of the Augmented assembly using a multimodal interface showcase can be found here.