Online, self-supervised vision-based terrain classification in unstructured environments

Outdoor, unstructured, cross-country environments pose several challenges for vision-based terrain classification on Unmanned Ground Vehicles (UGVs), including highly complex scene geometry, ground-cover variation, uncontrolled lighting, weather conditions and shadows. Color stereo vision is commonly used on UGVs, but current stereo hardware and processing algorithms are limited by the cameras' field of view and maximum range, so vehicles can become trapped in cul-de-sacs that could have been avoided had they been able to infer the layout of the terrain well beyond the range of the vision system. The strategy proposed in this paper pairs near-field stereo information with terrain appearance to train a classifier that labels the far-field terrain, well beyond stereo range, in each incoming image. To date, strategies based on this concept have been limited to constructing and applying a single model per frame. Although this single-model-per-frame approach adapts readily to changing environments, it retains no memory of past information. The approach described in this study instead uses an online, self-supervised learning algorithm that exploits multiple frames to build adaptive models of the different terrains the robot traverses. Preliminary but promising results of the proposed paradigm are presented on real data sets from the DARPA LAGR program, the current benchmark for vision-based terrain classification with machine-learning techniques. The paper concludes with a proposal for future work on developing robust terrain classifiers based on this methodology.
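To make the near-to-far, self-supervised idea concrete, the following is a minimal sketch, not the authors' actual system: it assumes color-histogram appearance features, stereo-derived traversability labels for near-field patches, and a simple online nearest-centroid model that is retained across frames (rather than rebuilt per frame). All names, features and the classifier choice here are illustrative assumptions; the paper's own feature set and learning algorithm may differ.

# Minimal sketch of online, near-to-far self-supervised terrain classification.
# Assumptions (not from the paper): RGB color histograms as appearance features,
# stereo-derived labels for near-field patches, and a running nearest-centroid
# model kept across frames.
import numpy as np


def patch_histogram(patch, bins=8):
    # Appearance feature: per-channel color histogram of an image patch.
    hist = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / (hist.sum() + 1e-9)


class OnlineTerrainModel:
    # Running class centroids updated frame by frame, so the model keeps a
    # memory of past terrain, unlike a single-model-per-frame scheme.

    def __init__(self, learning_rate=0.05):
        self.centroids = {}          # label -> feature centroid
        self.lr = learning_rate

    def update(self, features, labels):
        # Self-supervised update from near-field patches whose labels come
        # from stereo geometry (e.g. ground plane vs. obstacle).
        for f, y in zip(features, labels):
            if y not in self.centroids:
                self.centroids[y] = f.copy()
            else:
                self.centroids[y] += self.lr * (f - self.centroids[y])

    def predict(self, features):
        # Classify far-field patches, beyond stereo range, by appearance alone.
        labels = sorted(self.centroids)
        C = np.stack([self.centroids[y] for y in labels])
        d = np.linalg.norm(features[:, None, :] - C[None, :, :], axis=2)
        return [labels[i] for i in d.argmin(axis=1)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = OnlineTerrainModel()
    for frame in range(5):                       # stream of incoming images
        # Hypothetical near-field patches with stereo-derived labels.
        near = [rng.integers(0, 256, (16, 16, 3)) for _ in range(20)]
        near_labels = ["traversable" if i % 2 else "obstacle" for i in range(20)]
        model.update(np.stack([patch_histogram(p) for p in near]), near_labels)

        # Far-field patches (no stereo): classified from appearance alone.
        far = [rng.integers(0, 256, (16, 16, 3)) for _ in range(5)]
        print(frame, model.predict(np.stack([patch_histogram(p) for p in far])))

In this toy loop the random patches carry no real structure, so the predictions are arbitrary; the point is the data flow: each frame contributes stereo-labeled near-field samples to an incrementally updated model, which then extrapolates labels to far-field image regions.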
