Autonomous robot cameraman - Observation pose optimization for a mobile service robot in indoor living space

This paper presents a model-based system that enables a mobile robot to find an optimal pose for observing a person in indoor living environments. We define the observation pose as a combination of the camera position and view direction, as well as additional parameters such as the aperture angle. Optimal camera placement is not trivial because of the high dynamic range of scenes near windows or other bright light sources, which often results in poor image quality due to glare or hard shadows. The proposed method minimizes these negative effects by determining an optimal camera pose based on two major models: a spatial free-space model and a representation of the lighting. In particular, a task-dependent optimization takes into account the intended purpose of the camera images; for example, different inputs are needed for video communication with other people than for an image-processing-based passive observation of the person's activities. To demonstrate the validity of our approach, we present first experimental results comparing the chosen observation poses and the resulting images, with and without consideration of lighting, for different observation tasks.
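The abstract does not detail the cost terms or the optimizer, so the following is only a minimal illustrative sketch of the kind of task-dependent observation-pose optimization described: it searches candidate camera positions inside an assumed free-space region, points the camera at the person, and trades off a task-specific preferred viewing distance against a simple backlighting penalty for a single bright window. All coordinates, cost functions, and weights (PERSON, WINDOW, FREE_SPACE, viewing_cost, backlight_cost, the task weights) are hypothetical stand-ins, not the paper's actual models.

```python
# Illustrative sketch only: every constant and cost term below is a
# hypothetical stand-in for the free-space and lighting models described
# in the abstract, not the authors' actual formulation.
import math

PERSON = (2.0, 3.0)                 # assumed person position (m)
WINDOW = (0.0, 3.0)                 # assumed dominant bright light source (m)
FREE_SPACE = (0.5, 4.5, 0.5, 4.5)   # assumed reachable region: x_min, x_max, y_min, y_max


def in_free_space(x, y):
    x_min, x_max, y_min, y_max = FREE_SPACE
    return x_min <= x <= x_max and y_min <= y <= y_max


def viewing_cost(x, y, preferred_dist):
    """Penalize deviation from a task-dependent preferred observation distance."""
    d = math.hypot(PERSON[0] - x, PERSON[1] - y)
    return (d - preferred_dist) ** 2


def backlight_cost(x, y):
    """Penalize poses where the camera looks toward the bright window behind the
    person, i.e. the angle between camera->person and camera->window is small."""
    v_p = (PERSON[0] - x, PERSON[1] - y)
    v_w = (WINDOW[0] - x, WINDOW[1] - y)
    dot = v_p[0] * v_w[0] + v_p[1] * v_w[1]
    norm = math.hypot(*v_p) * math.hypot(*v_w) + 1e-9
    return max(0.0, dot / norm)     # 1.0 = person directly in front of window


def observation_cost(x, y, task):
    if not in_free_space(x, y):
        return float("inf")
    # Hypothetical task weights: video communication favors a close, frontal,
    # glare-free view; passive activity observation tolerates a larger distance.
    if task == "video_communication":
        return viewing_cost(x, y, preferred_dist=1.5) + 3.0 * backlight_cost(x, y)
    return viewing_cost(x, y, preferred_dist=2.5) + 1.0 * backlight_cost(x, y)


def best_pose(task, step=0.1):
    """Exhaustive grid search over candidate positions; the view direction is
    simply taken to point at the person once the position is fixed."""
    best, best_c = None, float("inf")
    x_min, x_max, y_min, y_max = FREE_SPACE
    x = x_min
    while x <= x_max:
        y = y_min
        while y <= y_max:
            c = observation_cost(x, y, task)
            if c < best_c:
                best, best_c = (x, y), c
            y += step
        x += step
    yaw = math.atan2(PERSON[1] - best[1], PERSON[0] - best[0])
    return best, yaw, best_c


if __name__ == "__main__":
    for task in ("video_communication", "passive_observation"):
        (px, py), yaw, cost = best_pose(task)
        print(f"{task}: position=({px:.1f}, {py:.1f}) m, "
              f"yaw={math.degrees(yaw):.0f} deg, cost={cost:.2f}")
```

In practice the grid search above would be replaced by a more scalable optimizer and the two penalty terms by the paper's spatial free-space and lighting models; the sketch only shows how a task label can switch the weighting of the cost terms.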
