Evaluation showcase

Leaders: Nicu Sebe (UvA) and Andreas Rauber (TU Vienna-IFS)

  • TU Vienna–IFS (Andreas Rauber, Thomas Lidy, Jakob Frank)
  • UvA (Nicu Sebe, Cees Snoek)
  • IRIT (Julien Pinquier, Thomas Foures, Philippe Joly)
  • AIIA – AUTH (Costas Kotropoulos, Emmanouil Benetos)
  • University of Surrey (Bill Christmas)
  • TAU (Arie Yeredor)
  • TU Vienna-PRIP (Allan Hanbury, Julian Stöttinger, Branislav Micusik)
  • INRIA-IMEDIA (Alexis Joly, Nozha Boujemaa)


The primary goal of this showcase is a demonstration of the wide range of semantic analysis and annotation capabilities developed within MUSCLE. A secondary goal is the promotion of objective evaluation of these capabilities.

Summary of the Activities and Results

Details of the Activities and Results

Live Retrieval Evaluation Events

We organised three live retrieval events at CIVR 2007 in Amsterdam. Each event had the following general structure (for details specific to each event, see the webpages above). Image or video databases were made available to participants beforehand so that they could be loaded into their retrieval systems. At CIVR, groups brought their retrieval systems and set them up. A set of queries was then handed out, and the authors of the retrieval systems were given the task of finding the target images/videos for the queries on the various search engines. We aimed for events that went beyond the regular demo session: they should be fun for the participants and fun to watch for the conference audience. A video of the VideOlympics event is available.

Evaluations on the Video Database

The goal of this component was to bring together and demonstrate the wide range of semantic analysis and annotation capabilities present within MUSCLE. Based on contributions from the participants, a video database was compiled from short TV recordings of different genres (e.g. news reports, music clips, commercials). The recordings were shared by all team members and made available through the evaluation web portal.
Showcase participants applied whatever semantic extraction and analysis (single- or multimodal) they could to the videos, such as low-level feature extraction, keyframe extraction, music genre analysis, and music segment clustering. To this end, partners were free to use any algorithms, external information, and additional in-house data to enhance the information extracted from the video.

Segmentation Evaluation Web Portal

The web portal is intended primarily for evaluating the results of temporal segmentation tools. It provides the necessary resources (e.g. free data sets and annotations) and enables evaluation of state-of-the-art methods outside the constrained timelines of scientific evaluation campaigns. The goal is to offer the research community an online tool that objectively measures various temporal segmentation results “on demand” and thereby indirectly promotes the best technology.
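As an illustration of the kind of objective measure such a portal could compute, the sketch below scores detected segment boundaries against reference annotations with a tolerance window (boundary precision, recall, and F1). This is an assumed metric for illustration only, not the portal's actual implementation; the function name and the tolerance parameter are hypothetical.

```python
def evaluate_boundaries(detected, reference, tolerance=0.5):
    """Match detected boundary times (seconds) to reference boundaries.

    A detected boundary counts as a true positive if it falls within
    `tolerance` seconds of an as-yet-unmatched reference boundary.
    Returns (precision, recall, f1).
    """
    matched = set()  # indices of reference boundaries already claimed
    tp = 0
    for d in detected:
        for i, r in enumerate(reference):
            if i not in matched and abs(d - r) <= tolerance:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

With a fixed tolerance and shared reference annotations, scores from different segmentation tools become directly comparable, which is what makes "on demand" evaluation meaningful.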