Real-time localisation and mapping with wearable active vision

We present a general method for real-time, vision-only, single-camera simultaneous localisation and mapping (SLAM), an algorithm applicable to the localisation of any camera moving through a scene, and study its application to the localisation of a wearable robot with active vision. Starting from very sparse initial scene knowledge, a map of natural point features spanning a section of a room is built on-the-fly while the camera's motion is simultaneously estimated in full 3D. This naturally permits annotation of the scene with rigidly registered graphics, but it also enables automatic control of the robot's active camera: for instance, fixation on a particular object can be maintained during extended periods of arbitrary user motion, and then shifted at will to another object, potentially one that has left the field of view. This functionality is key to the workspace understanding, or "management", that the robot needs in order to assist its wearer usefully in tasks. We believe the techniques and technology developed are of immediate value, particularly in remote-collaboration scenarios, where a remote expert can annotate, through the robot, the environment in which the wearer is working.
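To make the predict/measure/update cycle behind this kind of single-camera SLAM concrete, the sketch below shows a heavily simplified extended Kalman filter over a joint camera-plus-map state, in the spirit of the approach described above. It is a minimal illustration under stated assumptions, not the paper's implementation: camera orientation is held fixed at identity (the real system estimates full 3D orientation), Jacobians are computed numerically rather than analytically, and the names (`predict`, `update`, `project`, `fixation_angles`), the focal length, and the frame rate are all hypothetical choices for this example.

```python
# Minimal single-camera EKF-SLAM sketch (illustrative, simplified).
# State: camera position (3) + velocity (3) + one 3D point per feature.
# Assumptions (not from the paper): fixed identity orientation,
# f = 500 px, 30 Hz frames, numerical Jacobians.
import numpy as np

F_PX = 500.0       # assumed focal length in pixels
DT = 1.0 / 30.0    # assumed frame interval (30 Hz video)

def predict(x, P, accel_sd=1.0):
    """Constant-velocity prediction: position integrates velocity."""
    n = x.size
    F = np.eye(n)
    F[0:3, 3:6] = DT * np.eye(3)                     # pos += vel * DT
    Q = np.zeros((n, n))
    Q[3:6, 3:6] = (accel_sd * DT) ** 2 * np.eye(3)   # noise on velocity
    return F @ x, F @ P @ F.T + Q

def project(x, i):
    """Pinhole projection of feature i (state slots 6+3i .. 8+3i)."""
    p = x[6 + 3 * i: 9 + 3 * i] - x[0:3]             # point in camera frame
    return F_PX * np.array([p[0] / p[2], p[1] / p[2]])

def update(x, P, i, z, meas_sd=1.0):
    """EKF update with one image measurement z = (u, v) of feature i."""
    h0 = project(x, i)
    H = np.zeros((2, x.size))
    eps = 1e-6
    for j in range(x.size):                          # numerical Jacobian
        dx = np.zeros(x.size)
        dx[j] = eps
        H[:, j] = (project(x + dx, i) - h0) / eps
    R = meas_sd ** 2 * np.eye(2)
    S = H @ P @ H.T + R                              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ (z - h0)
    P = (np.eye(x.size) - K @ H) @ P
    return x, P

def fixation_angles(x, i):
    """Pan/tilt (rad) that point an active camera at feature i from the
    current pose estimate; this is the basis of fixation control."""
    p = x[6 + 3 * i: 9 + 3 * i] - x[0:3]
    pan = np.arctan2(p[0], p[2])
    tilt = np.arctan2(-p[1], np.hypot(p[0], p[2]))   # image y points down
    return pan, tilt

# Usage: camera at origin, one feature ~2 m ahead; one filter cycle.
x = np.array([0, 0, 0, 0, 0, 0, 0.1, -0.05, 2.0])
P = np.eye(9) * 0.01
x, P = predict(x, P)
x, P = update(x, P, 0, np.array([20.0, -12.0]))
print(fixation_angles(x, 0))
```

In the full system the state additionally carries orientation (and the filter maintains full cross-covariances between camera and features, which is what lets fixation return accurately to a mapped object after it leaves the field of view), but the predict/measure/update structure illustrated here is the same.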
