CONTEXT IN ROBOTIC VISION

The computer vision community is currently striving to build robust systems able to operate in unconstrained environments. However, the information contained in images is so vast that fast and reliable knowledge extraction is impossible without restricting the range of expected meaningful signals. Injecting a priori knowledge about the operating "context", and adding expectations about object appearance, is now recognized as a feasible way to address this problem. This paper attempts to define "context" in robotic vision by introducing a formalization that summarizes previous contributions by multiple authors. Starting from this formalization, we analyze one possible way to introduce context dependency into vision: an opportunistic switching strategy that selects the best-fitting scenario from a set of pre-compiled configurations. We provide a theoretical framework for "context switching", named Context Commutation, grounded in Bayesian theory. Finally, we describe a sample application of these ideas to improving video surveillance systems based on background subtraction.
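The paper's actual formalization of Context Commutation is not reproduced in this abstract, but the core idea of the switching strategy can be illustrated with a minimal sketch: each pre-compiled configuration is treated as a candidate context with a prior and a likelihood model for the current observations, and the system commutes to the configuration with the highest posterior. All identifiers and numeric values below (Context, select_context, the example likelihoods) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of Bayesian context selection among pre-compiled configurations.
# Names and values are illustrative assumptions, not the paper's formalization.

from dataclasses import dataclass
from typing import Callable, Dict, Sequence


@dataclass
class Context:
    """A pre-compiled configuration: a prior and a likelihood model for observations."""
    name: str
    prior: float                          # P(context)
    likelihood: Callable[[Dict], float]   # P(observation | context)


def select_context(contexts: Sequence[Context], observation: Dict) -> Context:
    """Pick the context with the highest (unnormalized) posterior P(context | observation)."""
    return max(contexts, key=lambda c: c.prior * c.likelihood(observation))


if __name__ == "__main__":
    # Toy observation: fraction of the image labeled foreground and mean brightness.
    obs = {"foreground_ratio": 0.35, "brightness": 0.2}

    contexts = [
        Context("empty_scene", prior=0.6,
                likelihood=lambda o: 1.0 - o["foreground_ratio"]),
        Context("crowded_scene", prior=0.3,
                likelihood=lambda o: o["foreground_ratio"]),
        Context("night", prior=0.1,
                likelihood=lambda o: 1.0 - o["brightness"]),
    ]

    best = select_context(contexts, obs)
    print(f"Switching to configuration: {best.name}")
```

In a full system the likelihoods would come from learned observation models and the posterior would be propagated over time rather than recomputed per frame, but the commutation step remains a maximum-a-posteriori choice among pre-compiled configurations.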

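For the surveillance application, the abstract only states that context switching is used to improve background subtraction. The sketch below shows one way switched configurations could parameterize a simple running-average background subtractor; the subtraction model and the per-context parameter sets are assumptions for illustration, not the method described in the paper.

```python
# Minimal sketch, assuming a running-average background model: the selected
# context only changes the subtractor's parameters (learning rate, threshold).
# Parameter values are illustrative, not taken from the paper.

import numpy as np


class RunningAverageSubtractor:
    """Background subtraction by exponential running average of past frames."""

    def __init__(self, learning_rate: float, threshold: float):
        self.learning_rate = learning_rate
        self.threshold = threshold
        self.background = None

    def apply(self, frame: np.ndarray) -> np.ndarray:
        frame = frame.astype(np.float32)
        if self.background is None:
            self.background = frame.copy()
        # Pixels far from the background model are labeled foreground.
        mask = np.abs(frame - self.background) > self.threshold
        # Update the model only where the pixel is considered background.
        self.background = np.where(
            mask, self.background,
            (1 - self.learning_rate) * self.background + self.learning_rate * frame,
        )
        return mask


# Hypothetical per-context parameter sets: a "crowded" context updates slowly to
# avoid absorbing people into the background; a "night" context lowers the threshold.
CONTEXT_PARAMS = {
    "empty_scene":   dict(learning_rate=0.05, threshold=25.0),
    "crowded_scene": dict(learning_rate=0.01, threshold=30.0),
    "night":         dict(learning_rate=0.02, threshold=12.0),
}

if __name__ == "__main__":
    subtractor = RunningAverageSubtractor(**CONTEXT_PARAMS["crowded_scene"])
    rng = np.random.default_rng(0)
    for _ in range(10):                          # stand-in for a video stream
        frame = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
        foreground = subtractor.apply(frame)
    print("foreground pixels in last frame:", int(foreground.sum()))
```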