Showcases
 
The following examples are illustrative and meant to give a flavour of the research that is being conducted within the Network.


Detecting Falling People

Software developed by Enis Cetin's group at Bilkent University is capable of telling the difference between a person sitting down and one actually falling. This technology could prove very useful for intelligent surveillance of vulnerable groups such as patients, toddlers and small children, and elderly persons.
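Purely as an illustration of how such a distinction might be automated (a hypothetical bounding-box heuristic, not the Bilkent group's method), one could watch how quickly the aspect ratio of a tracked person changes: a fall flattens the box much faster than sitting down does.

```python
from dataclasses import dataclass

@dataclass
class BoxSample:
    t: float       # timestamp in seconds
    width: float   # bounding-box width in pixels
    height: float  # bounding-box height in pixels

def looks_like_fall(samples, ratio_rate_threshold=1.5):
    """Flag a fall when the width/height aspect ratio of a tracked person
    grows unusually fast; the threshold is an illustrative guess."""
    if len(samples) < 2:
        return False
    first, last = samples[0], samples[-1]
    dt = last.t - first.t
    if dt <= 0:
        return False
    ratio_change = (last.width / last.height) - (first.width / first.height)
    return ratio_change / dt > ratio_rate_threshold

# Example: a box that goes from upright to lying down within half a second.
history = [BoxSample(0.0, 40, 120), BoxSample(0.5, 110, 45)]
print(looks_like_fall(history))  # True
```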

Read more...
 
Eyetracking

improves image retrieval

Recent experiments have shown that visual target selection is significantly faster with an eyetracker than with a mouse. This mechanism has now been developed into an interface for image retrieval in which the user can search a database of images for a target image using eye movements alone. A network of pre-computed inter-image similarities is used to provide successive displays of thumbnail images that are similar to the images that attracted attention in previous displays. The video shows the fixations and saccades during a typical search. The target image is in the top left corner and the action stops as soon as the target image is retrieved.
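As a rough sketch of that display-update loop (the function names, scoring rule and data below are illustrative assumptions, not the actual interface), each new display simply ranks images by their pre-computed similarity to the thumbnails the user fixated in the previous display:

```python
import numpy as np

def next_display(similarity, attended, num_thumbnails=9, shown=None):
    """Pick the next set of thumbnails: images most similar to the ones
    that attracted the user's attention in the previous display.

    similarity: (N, N) pre-computed inter-image similarity matrix
    attended:   indices of images fixated in the previous display
    shown:      indices already displayed (excluded from re-display)
    """
    shown = set(shown or [])
    # Score every image by its total similarity to the attended images.
    scores = similarity[:, attended].sum(axis=1)
    scores[list(shown)] = -np.inf          # do not repeat earlier thumbnails
    ranked = np.argsort(scores)[::-1]
    return ranked[:num_thumbnails].tolist()

# Toy example with 100 images and random similarities.
rng = np.random.default_rng(0)
sim = rng.random((100, 100))
sim = (sim + sim.T) / 2                    # make it symmetric
display = list(range(9))                   # initial display
attended = [2, 5]                          # thumbnails the user fixated
display = next_display(sim, attended, shown=display)
print(display)
```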

Read more...
 
An Online Learning Framework

for Object Detection, Tracking and Recognition

ICG (TU Graz) has developed a novel framework for object detection, tracking and recognition. Unlike other approaches, the aim is a seamless integration of these modules. Detection and tracking are performed with an online AdaBoost algorithm based on Haar wavelets and local orientation histograms, while recognition uses a fast approximated SIFT descriptor. All modules share a common over-complete representation. Thanks to an integral data structure, the features can be computed in real time, enabling a real-time implementation of all three modules. The common feature pool allows detection, tracking and recognition to be integrated not only at the module level but also at the feature level, which in turn improves robustness and opens new avenues for module interaction.
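A minimal sketch of the integral-image trick that makes such features cheap to evaluate (a generic illustration, not the ICG implementation): after one pass over the image, any rectangle sum, and hence any Haar-like feature, costs only a handful of lookups.

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns: ii[y, x] is the sum of all
    pixels above and to the left of (y, x), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y, x, h, w):
    """Sum of the rectangle with top-left corner (y, x), height h and
    width w, using at most four lookups in the integral image."""
    total = ii[y + h - 1, x + w - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0 and x > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

img = np.arange(36, dtype=np.float64).reshape(6, 6)
ii = integral_image(img)
print(haar_two_rect(ii, 1, 1, 3, 4))  # constant-time feature response
```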

Read more...
 
Human Detection

in Difficult Scenarios by Combining Motion and Appearance

Reliable human detection is a key issue in automated visual surveillance systems. Motion detection provides a strong cue towards accomplishing this task; however, typical scenarios usually contain motion clutter leading to false alarms. MUSCLE researchers at ACV have developed a real-time framework that combines a model-based human detection approach relying on motion detection with statistical learning, in order to validate detected objects and remove spurious observations. The combined detection scheme shows improvements in terms of lower false alarm rates and better tracking performance in difficult scenarios containing moving shadows and vehicles.
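The general pattern can be sketched as follows, using OpenCV's stock background subtractor and pre-trained HOG person detector as stand-ins for the ACV detectors (the file name and thresholds are illustrative; this is not the ACV implementation):

```python
import cv2

# Motion cue: background subtraction proposes candidate regions.
bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

# Appearance cue: a pre-trained HOG + linear-SVM person detector
# validates the motion candidates (stand-in for a learned model).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame, min_area=500):
    """Return boxes that are both moving and person-like."""
    mask = bg_subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
    validated = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue
        x, y, w, h = cv2.boundingRect(contour)
        if w < 64 or h < 128:               # too small for the HOG window
            continue
        found, _ = hog.detectMultiScale(frame[y:y + h, x:x + w])
        if len(found) > 0:                  # appearance model agrees
            validated.append((x, y, w, h))
    return validated

cap = cv2.VideoCapture("surveillance.avi")  # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in detect_people(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```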

Read more...
 
RETIN

An Interactive Content-Based Image Retrieval System

Content-Based Image Retrieval (CBIR) systems have attracted a great deal of research attention since the 1990s. In contrast to early systems, which focused on fully automatic strategies, recent approaches introduce human-computer interaction into CBIR. RETIN is an online image search system developed at the ETIS lab (ENSEA, France). A web version of the software is available (beta version)!
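As a toy illustration of the interactive idea, here is a generic relevance-feedback loop over image feature vectors (a Rocchio-style query update on random toy data; this is not RETIN's actual learning machinery):

```python
import numpy as np

def rocchio_update(query, relevant, non_relevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector toward images the user marked relevant and
    away from those marked non-relevant (classic Rocchio feedback)."""
    q = alpha * query
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        q -= gamma * np.mean(non_relevant, axis=0)
    return q

def rank(database, query, top_k=10):
    """Rank database images by Euclidean distance to the query vector."""
    dists = np.linalg.norm(database - query, axis=1)
    return np.argsort(dists)[:top_k]

rng = np.random.default_rng(1)
db = rng.random((1000, 64))       # toy data: 1000 images, 64-D descriptors
query = db[0].copy()
shown = rank(db, query)
# Simulated feedback round: the user labels a few of the results.
relevant, non_relevant = db[shown[:2]], db[shown[2:4]]
query = rocchio_update(query, relevant, non_relevant)
print(rank(db, query))            # refined ranking after feedback
```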

Read more...
 
Map of Mozart

Mozart's complete works analyzed

Andi Rauber's team focuses on research in Music Information Retrieval, and thus on methods that enable automatic recognition of the style, genre or artist of a piece of music, or retrieval of similar music. Moreover, the information extracted by these methods forms the basis for a novel way of accessing music archives.
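As a toy illustration of the retrieval-by-similarity side (a generic nearest-neighbour search over audio feature vectors; the data and features are stand-ins, not the team's descriptors):

```python
import numpy as np

def most_similar(features, query_index, k=5):
    """Return the indices of the k tracks whose feature vectors are
    closest (Euclidean distance) to the query track."""
    dists = np.linalg.norm(features - features[query_index], axis=1)
    order = np.argsort(dists)
    return [i for i in order if i != query_index][:k]

rng = np.random.default_rng(3)
tracks = rng.random((500, 30))   # toy data: 500 pieces, 30-D feature vectors
print(most_similar(tracks, query_index=0))
```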

Read more...
 
PlaySOM

Innovative interfaces for music categorization and access 

By applying SOM neural nets to automatically analyzed audio content (frequency spectra), researchers at TUWien-IFS have created an innovative way to organize music by sound similarity. After categorization, the content of an audio collection is graphically displayed (on the screen of a laptop or mobile device) as a topographic map in which colour represents the density of a particular genre in the collection. The user can then select a playlist reflecting his or her mood by drawing a path through this audio landscape.
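A minimal sketch of the underlying SOM step, assuming each track is already represented by a spectral feature vector (a toy online self-organising map, not the PlaySOM code): similar tracks end up mapped to nearby grid cells, which is what makes the map-like display possible.

```python
import numpy as np

def train_som(features, grid=(10, 10), epochs=20, lr=0.5, sigma=3.0, seed=0):
    """Train a small self-organising map: each grid cell holds a weight
    vector, and similar tracks end up mapped to nearby cells."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, features.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols),
                                   indexing="ij")).astype(float)
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs
        for x in features:
            # Best-matching unit: grid cell whose weight is closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Neighbourhood function: cells near the BMU move more.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            influence = np.exp(-(grid_dist ** 2) / (2 * (sigma * decay) ** 2))
            weights += (lr * decay) * influence[..., None] * (x - weights)
    return weights

def map_track(weights, x):
    """Place one track on the map by finding its best-matching cell."""
    dists = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

rng = np.random.default_rng(2)
specs = rng.random((200, 32))          # toy data: 200 tracks, 32-bin spectra
som = train_som(specs)
print(map_track(som, specs[0]))        # grid coordinates of the first track
```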

Read more...
 