Laser and Optical Flow Fusion for a Non-Intrusive Obstacle Detection System on an Intelligent Wheelchair

In this paper, a method that combines a low-cost 16-beam solid-state laser sensor with a conventional video camera for obstacle detection is presented. The system is intended to form a non-intrusive virtual barrier on both sides of an intelligent wheelchair, in order to protect the user in everyday outdoor and indoor environments such as offices, homes, or pedestrian areas. In these environments, obstacle shapes are very heterogeneous (for instance, an obstacle's width can vary along its height), so detecting them with the conventional sensors installed on the wheelchair is difficult, especially around the most exposed areas of the user: the sides of the torso. With the proposed system, when an obstacle intersects a beam, the intersection point is projected onto the associated image and optical flow is computed on both sides of this point. Using the optical flow, several classifiers have been trained and tested to automatically distinguish intersections produced by the user from those produced by external obstacles. Results show that the method accurately detects the presence of obstacles and the direction of their movement.
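The per-frame step described above (projecting a beam-intersection point into the camera image and measuring optical flow in patches on either side of it) can be sketched roughly as follows. This is only an illustrative sketch and not the authors' implementation: the calibration values (K, dist, rvec, tvec), the patch geometry, and the use of OpenCV's Farneback dense flow are all assumptions introduced for the example.

```python
import numpy as np
import cv2

# Hypothetical camera intrinsics and laser-to-camera extrinsics (assumed values).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                 # assume negligible lens distortion
rvec = np.zeros(3)                 # assume laser and camera frames are aligned
tvec = np.array([0.05, 0.0, 0.0])  # assumed 5 cm lateral offset between sensors


def project_intersection(point_laser_3d):
    """Project a 3D beam-intersection point (laser frame) onto the image plane."""
    pts, _ = cv2.projectPoints(point_laser_3d.reshape(1, 1, 3).astype(np.float64),
                               rvec, tvec, K, dist)
    return pts.reshape(2)  # (u, v) pixel coordinates


def flow_around(prev_gray, gray, center, offset=20, half=10):
    """Mean optical flow in two patches to the left and right of `center`."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = int(center[0]), int(center[1])

    def patch_mean(cx):
        patch = flow[max(v - half, 0):v + half, max(cx - half, 0):cx + half]
        return patch.reshape(-1, 2).mean(axis=0)  # mean (dx, dy) over the patch

    return patch_mean(u - offset), patch_mean(u + offset)
```

The two mean flow vectors returned by flow_around would then serve as features for a classifier that decides whether the motion at the intersection is produced by the user's own body or by an external obstacle; the actual feature set and classifiers used are those described in the paper, not the ones suggested here.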
