Face tracking in meeting room scenarios using omnidirectional views

Robust localization and tracking of faces in video streams is a fundamental prerequisite for many subsequent multi-modal recognition approaches. In meeting scenarios especially, several independent processing pipelines, such as group-action and face recognizers, rely on the position and gaze of faces. Recording a meeting with multiple cameras is considerably more expensive than a single omnidirectional camera setup, so it is desirable to use the more easily acquired omnidirectional recordings. This work presents an implementation of a robust particle-filter-based face tracker using omnidirectional views. It is shown how omnidirectional images must be unwarped before they can be processed by localization and tracking systems designed for undistorted material. The performance of the system is evaluated on part of the PETS-ICVS 2003 smart meeting room dataset.
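The unwarping step mentioned above can be illustrated with a minimal polar-to-Cartesian mapping. This is a generic sketch, not the paper's actual implementation: the function name, the donut-shaped mirror geometry (`r_inner`, `r_outer`), and nearest-neighbour sampling are all assumptions chosen for brevity.

```python
import numpy as np

def unwarp_omnidirectional(img, center, r_inner, r_outer, out_w=720):
    """Unwarp a donut-shaped omnidirectional image into a panoramic strip.

    Hypothetical helper: each output column corresponds to a viewing angle
    and each output row to a radius around the mirror center; the source
    image is sampled by nearest neighbour at the mapped positions.
    """
    cx, cy = center
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.arange(r_inner, r_outer)
    # Polar-to-Cartesian lookup: x = cx + r*cos(theta), y = cy + r*sin(theta)
    xs = np.rint(cx + np.outer(radii, np.cos(thetas))).astype(int)
    ys = np.rint(cy + np.outer(radii, np.sin(thetas))).astype(int)
    xs = np.clip(xs, 0, img.shape[1] - 1)
    ys = np.clip(ys, 0, img.shape[0] - 1)
    return img[ys, xs]  # shape: (r_outer - r_inner, out_w, channels)
```

In practice the sampling would use bilinear interpolation and a calibrated mirror model rather than this simple radial lookup, but the core idea is the same: faces appear approximately upright in the unwarped strip, which is what standard detectors expect.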

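The particle-filter tracking itself follows the CONDENSATION scheme of resample, predict, and measure. The sketch below is a minimal generic version under assumed simplifications: a random-walk motion model with Gaussian diffusion, and a caller-supplied `measure` likelihood (e.g. a skin-colour score at each particle position); none of these specifics are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, measure, noise_std=5.0):
    """One CONDENSATION iteration on an (N, 2) array of (x, y) particles.

    `measure(positions)` is a hypothetical observation-likelihood function
    returning one non-negative score per particle.
    """
    n = len(particles)
    # 1. Resample: draw particles proportionally to their current weights
    idx = rng.choice(n, size=n, p=weights / weights.sum())
    particles = particles[idx]
    # 2. Predict: diffuse with Gaussian process noise (random-walk model)
    particles = particles + rng.normal(0.0, noise_std, particles.shape)
    # 3. Measure: re-weight each particle by the observation likelihood
    weights = measure(particles)
    return particles, weights
```

Iterating this step concentrates the particle cloud on the face position; the weighted mean of the particles serves as the tracked location in each frame.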