A vision-based perception framework for outdoor navigation tasks applicable to legged robots

We propose a vision-based perception system for understanding outdoor environments. The system extracts useful information from the binocular cameras of a legged robot to perceive landform, terrain, and ground conditions. Ground texture recognition provides information about the material of the ground, so the robot can choose an appropriate walking pose accordingly. By combining terrain perception and obstacle detection with semantic segmentation of the environment, a legged robot can move toward its target with an intelligent obstacle-avoidance strategy. Building on this perception system, we propose a vision-based perception framework for outdoor navigation tasks applicable to legged robots. In this framework, environmental modeling and situation assessment are first carried out using multi-modal sensor fusion, and footstep plans and local paths are then generated.
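The pipeline described above (stereo input, texture and terrain perception, obstacle detection, then footstep and path planning) can be sketched as follows. This is a minimal illustrative sketch: all function names, the material-to-gait table, and the avoidance rule are assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch of the perception-to-planning pipeline from the
# abstract. All names and rules below are assumed for exposition only.

# Assumed mapping from recognized ground material to a walking gait.
GAIT_BY_MATERIAL = {
    "grass": "compliant_walk",
    "gravel": "slow_walk",
    "asphalt": "trot",
}

def classify_ground_texture(left_image):
    """Stub texture classifier: returns a ground-material label.

    A real system would run a learned texture classifier here.
    """
    return "gravel"

def detect_obstacles(left_image, right_image):
    """Stub stereo obstacle detector: returns obstacle positions (x, y).

    A real system would first compute a disparity map from the stereo pair.
    """
    return [(1.0, 0.5)]

def plan_step(material, obstacles, target):
    """Choose a gait from the ground material and a heading around obstacles."""
    gait = GAIT_BY_MATERIAL.get(material, "cautious_walk")
    # Naive avoidance: sidestep if any obstacle was detected on the path.
    heading = "sidestep" if obstacles else "straight"
    return {"gait": gait, "heading": heading, "target": target}

def perceive_and_plan(left_image, right_image, target):
    """Run the full pipeline: perception first, then local planning."""
    material = classify_ground_texture(left_image)
    obstacles = detect_obstacles(left_image, right_image)
    return plan_step(material, obstacles, target)

if __name__ == "__main__":
    plan = perceive_and_plan(left_image=None, right_image=None, target=(5.0, 0.0))
    print(plan)
```

The point of the sketch is the data flow: texture recognition informs gait selection, while stereo-based obstacle detection informs the local heading, and both feed a single planning step.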
