Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

Abstract Although machine learning holds enormous promise for autonomous space robots, it is currently not employed in practice because the outcome of a learning process is inherently uncertain. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), that is very reliable and hence an important candidate for real-world deployment, even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a robot equipped with stereo vision to cope with the failure of one of its cameras: the robot learns to estimate average depth from a monocular image, using past stereo-vision depths as trusted ground truth. We present preliminary results from an experiment performed with the MIT/NASA SPHERES VERTIGO satellite on board the International Space Station (ISS) on October 8th, 2015. The main goals were (1) data gathering and (2) navigation based on stereo vision. First, astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo-vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo-vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. Both goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration of the ISS, and they are a step towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
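The SSL scheme described above can be illustrated with a minimal sketch: during operation, trusted stereo-vision depths serve as labels for a regressor that maps monocular image features to average depth, so the robot can fall back on a single camera if the stereo pair fails. The feature extractor, the synthetic images (whose brightness is made to correlate with depth), and the least-squares fit below are all illustrative assumptions, not the actual VERTIGO pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def monocular_features(image):
    """Toy feature extractor: mean intensity of four horizontal bands.
    Stands in for the appearance/texture cues a real system would use."""
    bands = np.array_split(image, 4, axis=0)
    return np.array([b.mean() for b in bands])

# --- Phase 1: self-supervised data gathering ---
# Each synthetic "frame" has a brightness that correlates with the
# (hypothetical) stereo-derived average depth, which is the trusted label.
X, y = [], []
for _ in range(200):
    depth = rng.uniform(0.5, 5.0)                        # stereo average depth (m)
    image = rng.normal(loc=0.2 * depth, scale=0.05, size=(16, 16))
    X.append(monocular_features(image))
    y.append(depth)
X, y = np.array(X), np.array(y)

# --- Phase 2: fit the monocular depth estimator (least squares) ---
A = np.c_[X, np.ones(len(X))]                            # append a bias term
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# --- Phase 3: estimate depth from a single camera ---
test_depth = 2.0
test_image = rng.normal(loc=0.2 * test_depth, scale=0.05, size=(16, 16))
pred = np.r_[monocular_features(test_image), 1.0] @ w
print(f"predicted average depth: {pred:.2f} m")
```

Because the labels come from the robot's own stereo system rather than from human annotation, training data keep accumulating during normal operation, which is what makes the learning outcome comparatively reliable.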
