6-DOF Pose Estimation of a Robotic Navigation Aid by Tracking Visual and Geometric Features

This paper presents a 6-DOF pose estimation (PE) method for a robotic navigation aid (RNA) for the visually impaired. The RNA uses a single 3D camera for PE and object detection. The proposed method processes the camera's intensity and range data to estimate the camera's egomotion, which is then used by an extended Kalman filter (EKF) as the motion model for tracking a set of visual features for PE. A RANSAC process is employed in the EKF to identify inliers among the visual feature correspondences between two image frames, and only the inliers are used to update the EKF's state. The EKF integrates the egomotion into the camera's pose in the world coordinate system. To maintain the EKF's consistency, the distance between the camera and the floor plane (extracted from the range data) is used by the EKF as the observation of the camera's z coordinate. Experimental results demonstrate that the proposed method yields accurate pose estimates for positioning the RNA in indoor environments. Based on the PE method, a wayfinding system is developed for localization of the RNA in a home environment. The system uses the estimated pose and the floorplan to locate the user within the home, and announces points of interest and navigational commands through a speech interface.
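The abstract describes three coupled mechanisms: a frame-to-frame egomotion estimate that drives the EKF's prediction step, a RANSAC test that admits only inlier feature correspondences into the measurement update, and the camera-to-floor-plane distance used as a direct observation of the camera's z coordinate. The following is a minimal Python sketch of that filter structure, not the authors' implementation; the class and function names (PoseEKF, fit_rigid, ransac_inliers), the additive 6-DOF Euler pose composition, and all noise values are illustrative assumptions.

```python
import numpy as np

class PoseEKF:
    """Toy 6-DOF pose filter: state is [x, y, z, roll, pitch, yaw]."""
    def __init__(self):
        self.x = np.zeros(6)          # camera pose in the world frame
        self.P = np.eye(6) * 1e-3     # state covariance

    def predict(self, ego_delta, Q):
        # Motion model: fold the frame-to-frame egomotion estimate into the
        # pose (simplified to additive composition; real code works on SE(3)).
        self.x = self.x + ego_delta
        self.P = self.P + Q

    def update(self, z, h, H, R):
        # Standard EKF correction with measurement z, predicted measurement h,
        # Jacobian H, and measurement noise covariance R.
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - h)
        self.P = (np.eye(6) - K @ H) @ self.P

    def update_floor_distance(self, d_floor, sigma=0.01):
        # The camera-to-floor-plane distance observes the z coordinate
        # directly, which is what keeps the filter's height consistent.
        H = np.zeros((1, 6)); H[0, 2] = 1.0
        self.update(np.array([d_floor]), self.x[2:3], H,
                    np.array([[sigma ** 2]]))

def fit_rigid(A, B):
    # Least-squares rigid transform B ~ R A + t via SVD (Arun et al., 1987).
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # repair a reflection if one slips in
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def ransac_inliers(p_prev, p_curr, n_iter=200, tol=0.02, rng=None):
    # Hypothesize rigid motions from minimal 3-point samples of the 3-D
    # feature correspondences; keep the largest consensus set as inliers.
    if rng is None:
        rng = np.random.default_rng(0)
    best = np.zeros(len(p_prev), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(p_prev), size=3, replace=False)
        R, t = fit_rigid(p_prev[idx], p_curr[idx])
        err = np.linalg.norm(p_prev @ R.T + t - p_curr, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# --- usage sketch (values are arbitrary) ------------------------------------
ekf = PoseEKF()
ekf.predict(ego_delta=np.array([0.05, 0, 0, 0, 0, 0.01]), Q=np.eye(6) * 1e-4)
ekf.update_floor_distance(d_floor=1.12)   # camera height above the floor plane
```

In the actual method, the inlier correspondences themselves feed the EKF's measurement update rather than only gating it, and pose composition would be carried out on SE(3) with proper Jacobians; the sketch only shows where each piece of the abstract's pipeline sits.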
