
Dynamic Texture Detection in Video

Leaders: A. Enis Cetin, Bilkent University, and Dmitry Chetverikov, MTA SZTAKI

  • Bilkent University (B. Ugur Toreyin, Y. Dedeoglu, A. Enis Cetin (project leader))
  • SZTAKI (S. Fazekas, D. Chetverikov)
  • Tel Aviv University (T. Amiaz, N. Kiryati)


Summary: Dynamic textures are common in natural scenes. Examples of dynamic textures in video include fire, smoke, clouds, trees in the wind, sky, and sea and ocean waves. In this showcase, we (i) develop real-time dynamic texture detection methods for video and (ii) present solutions to video object classification based on motion information.

Objective of the Showcase Project: Two-dimensional textures and related problems have been studied extensively in the field of image processing. By contrast, there is very little research on dynamic texture detection in video. It is well known that tree leaves in the wind, moving clouds, etc. cause major problems for outdoor video motion detection systems. If bushes, trees, and clouds can be identified in a video beforehand, such regions can be excluded from the search space or treated with proper care, leading to more robust moving object detection and identification in outdoor video. Research on 2-D textures can be exploited to model the spatial behaviour of a given dynamic texture; however, to detect and segment dynamic textures in challenging real-world applications, differences in dynamics must also be analysed. Two different approaches will be studied in this showcase. In the first approach, dynamic textures are classified as weak or strong. Weak dynamic textures will be analysed with standard optical flow algorithms relying on the brightness constancy assumption. However, self-occlusion, material diffusion, and other physical processes that do not obey brightness constancy make such algorithms inappropriate for strong dynamic textures. An alternative to brightness constancy, the brightness conservation assumption, enables the brightness of an image point to propagate to its neighbourhood and thus to model complex brightness changes. A non-regular optical flow calculation based on the brightness conservation assumption provides a better model for strong dynamic textures.
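The weak/strong distinction can be illustrated with a small numerical sketch. The code below is a hypothetical illustration, not the project's implementation: it evaluates the linearized brightness constancy equation I_x·u + I_y·v + I_t = 0 on two synthetic frame pairs. A pure translation (a weak dynamic texture) leaves a small residual, while a translation combined with a brightness decay (mimicking a strong dynamic texture such as smoke) violates the assumption and leaves a large residual.

```python
import numpy as np

def constancy_residual(frame1, frame2, u, v):
    """Per-pixel residual of the linearized brightness constancy
    equation I_x*u + I_y*v + I_t = 0. Large residuals mark pixels
    whose brightness change cannot be explained by motion alone,
    hinting at a 'strong' dynamic texture."""
    Iy, Ix = np.gradient(frame1.astype(float))   # spatial gradients
    It = frame2.astype(float) - frame1.astype(float)  # temporal derivative
    return Ix * u + Iy * v + It

# Synthetic sinusoidal pattern shifted one pixel right (weak texture).
f1 = np.tile(np.sin(np.linspace(0, 8 * np.pi, 64)), (64, 1))
f2 = np.roll(f1, 1, axis=1)
r_weak = constancy_residual(f1, f2, u=np.ones_like(f1), v=np.zeros_like(f1))

# Same shift, but with brightness decay: constancy is violated.
f3 = 0.5 * f2
r_strong = constancy_residual(f1, f3, u=np.ones_like(f1), v=np.zeros_like(f1))

print(np.abs(r_weak).mean(), np.abs(r_strong).mean())
```

Under the brightness conservation assumption, the decayed brightness would instead be allowed to diffuse to neighbouring points, which is why it copes better with fire and smoke.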

In the second approach, prior information about dynamic textures is used to detect smoke and flames in video. It has been experimentally observed that the flame flicker process is not a narrow-band activity but a wide-band activity covering 2 to 15 Hz. Zero-crossings of wavelet coefficients covering the 2 to 15 Hz band are an effective feature, and Hidden Markov Models (HMMs) can be trained on this wavelet-domain data to capture the temporal characteristics of fire. Similarly, the temporal behaviour of tree leaves in the wind or of cloud motion will be investigated to achieve robust video understanding systems, including content-based video retrieval systems.
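The zero-crossing feature can be sketched as follows. This is an assumption-laden toy example, not the showcase's actual detector: it uses a one-level Haar wavelet detail signal of a pixel's brightness over time (for an assumed 30 fps video, this band lies inside the 2 to 15 Hz flicker range) and counts sign changes. A flickering flame pixel, modelled here as wide-band noise, yields far more crossings than an ordinary slowly varying pixel; in the original work the crossing statistics feed an HMM rather than a simple comparison.

```python
import numpy as np

def haar_detail(signal):
    """One-level Haar wavelet detail coefficients of a 1-D temporal
    brightness signal: pairwise half-differences of consecutive samples."""
    s = np.asarray(signal, dtype=float)
    s = s[: len(s) // 2 * 2]                 # drop odd trailing sample
    return (s[0::2] - s[1::2]) / 2.0

def zero_crossings(coeffs, eps=1e-3):
    """Count sign changes of the detail coefficients, ignoring
    near-zero values below eps."""
    signs = np.sign(np.where(np.abs(coeffs) < eps, 0.0, coeffs))
    nz = signs[signs != 0]
    return int(np.sum(nz[1:] * nz[:-1] < 0))

rng = np.random.default_rng(0)
t = np.arange(128) / 30.0                          # 128 frames at 30 fps
flame = 0.5 + 0.3 * rng.standard_normal(128)       # wide-band flicker (synthetic)
walker = 0.5 + 0.3 * np.sin(2 * np.pi * 0.5 * t)   # slow 0.5 Hz brightness change

zc_flame = zero_crossings(haar_detail(flame))
zc_walker = zero_crossings(haar_detail(walker))
print(zc_flame, zc_walker)
```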

Demo and Web Sites:

  • Sample fire and smoke detection videos and the software
  • Dynamic texture segmentation results
  • DynTex video database

The proposed fire and smoke detection method is also explained at CVonline, the Compendium of Computer Vision maintained by Prof. R. B. Fisher, University of Edinburgh.