Evaluation showcase
Leaders: Nicu Sebe (UvA) and Andreas Rauber (TU Vienna-IFS)
Introduction

The primary goal of this showcase is to demonstrate the wide range of semantic analysis and annotation capabilities developed within MUSCLE. A secondary goal is to promote objective evaluation of these capabilities.

Summary of the Activities and Results
Details of the Activities and Results

Live Retrieval Evaluation Events

We organised three live retrieval events at CIVR 2007 in Amsterdam. Each event had the following general structure (for the details specific to each event, see the webpages above). Image or video databases were made available to participants beforehand so that they could be loaded into their retrieval systems. At CIVR, groups brought their retrieval systems and set them up. A set of queries was then handed out, and the authors of the retrieval systems were given the task of finding the target images/videos for the queries on the various search engines. We aimed for events that go beyond the regular demo session: they should be fun for the participants and fun to watch for the conference audience. A video of the VideOlympics event is available.

Evaluations on the Video Database

The goal of this component was to bring together and demonstrate the wide range of semantic analysis and annotation capabilities present within MUSCLE. Based on contributions from the participants, a video database was compiled from short TV recordings of different genres (e.g. news reports, music clips, commercials). These were integrated and shared by all team members, and were also provided in the evaluation web portal. Showcase participants performed whatever semantic extraction and analysis (single- or multimodal) they could apply to the videos, such as low-level feature extraction, keyframe extraction, music genre analysis, and music segment clustering. To this end, partners were allowed to use all kinds of algorithms, additional external information, as well as additional data they may have and use within their own labs to enhance the information extracted from the video.

Segmentation Evaluation Web Portal

The web portal is intended in particular for evaluating the results of temporal segmentation tools. It provides all the necessary environment resources (e.g. free data sets and annotations) and enables evaluation of state-of-the-art methods outside the constrained timelines of scientific evaluation campaigns. The goal here is to offer the research community an online evaluation tool that objectively measures various temporal segmentation results “on demand” and thereby indirectly promotes the best technology.
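The portal's internal evaluation protocol is not detailed here, but a common objective metric for temporal segmentation is boundary precision/recall within a tolerance window: a detected boundary counts as correct if it lies close enough to a reference boundary. A minimal sketch of this idea follows; the function names and the 0.5-second tolerance are illustrative assumptions, not the portal's actual implementation.

```python
def match_boundaries(detected, reference, tolerance=0.5):
    """Greedily match detected boundary times (in seconds) to reference
    boundaries within +/- tolerance; each reference boundary is used once."""
    matched = 0
    used = set()
    for d in detected:
        for i, r in enumerate(reference):
            if i not in used and abs(d - r) <= tolerance:
                used.add(i)
                matched += 1
                break
    return matched

def precision_recall_f1(detected, reference, tolerance=0.5):
    """Boundary-level precision, recall, and F1 for a segmentation result."""
    hits = match_boundaries(detected, reference, tolerance)
    precision = hits / len(detected) if detected else 0.0
    recall = hits / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: three detected shot boundaries, four in the ground truth.
p, r, f = precision_recall_f1([3.1, 10.0, 25.4],
                              [3.0, 10.2, 18.0, 25.0],
                              tolerance=0.5)
# All three detections match a reference boundary, one reference boundary is
# missed: precision = 1.0, recall = 0.75.
```

An "on demand" portal would apply such a metric server-side to uploaded segmentation results, so that all submissions are scored against the same annotations.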