Multimodal information-theoretic measures for autonomous exploration

Autonomous underwater vehicles (AUVs) are widely used to perform information-gathering missions in unseen environments. Given the sheer size of the ocean and the time and energy constraints of an AUV, it is important to consider the potential utility of candidate missions during survey planning. In this paper, we utilise a multimodal learning approach to capture the relationship between in-situ visual observations and shipborne bathymetry (ocean depth) data, which are freely available a priori. We then derive information-theoretic measures under this model that predict the amount of visual information gain at an unobserved location based on its bathymetric features. Unlike previous approaches, these measures consider the value of additional visual features, rather than just the habitat labels obtained. Experimental results with a toy dataset and real marine data demonstrate that the approach can be used to predict the true utility of unexplored areas.
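To make the utility idea concrete, the sketch below illustrates the simpler, habitat-label-based baseline that the paper's measures generalise beyond: scoring a candidate site by the entropy of a predicted habitat-class distribution, so that sites where bathymetry is least informative about the visual outcome rank highest. This is a minimal, hypothetical example, not the paper's actual model; the site names and probabilities are invented for illustration.

```python
import numpy as np

def predictive_entropy(class_probs):
    """Shannon entropy (in bits) of a categorical predictive distribution."""
    p = np.asarray(class_probs, dtype=float)
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

# Hypothetical candidate survey sites: habitat-class probabilities predicted
# from bathymetric features (numbers are illustrative only).
sites = {
    "site_A": [0.95, 0.03, 0.02],  # bathymetry strongly predicts one habitat
    "site_B": [0.40, 0.35, 0.25],  # bathymetry is uninformative -> high utility
}

utilities = {name: predictive_entropy(p) for name, p in sites.items()}
best = max(utilities, key=utilities.get)  # the most uncertain, hence most informative, site
```

Under this baseline, `site_B` would be preferred because observing it resolves more uncertainty; the paper's contribution is to score the richer visual feature space rather than only the discrete habitat label.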
