AI4MARS: A Dataset for Terrain-Aware Autonomous Driving on Mars

Deep learning has quickly become a necessity for self-driving vehicles on Earth. In contrast, self-driving vehicles on Mars, including NASA's latest rover, Perseverance, which is scheduled to land on Mars in February 2021, are still driven by classical machine vision systems. Deep learning capabilities, such as semantic segmentation and object recognition, would substantially benefit the safety and productivity of ongoing and future missions to the red planet. To this end, we created the first large-scale dataset, AI4Mars, for training and validating terrain classification models for Mars, consisting of ~326K semantic segmentation full-image labels on ~35K images from the Curiosity, Opportunity, and Spirit rovers, collected through crowdsourcing. Each image was labeled by ~10 people to ensure greater quality and agreement of the crowdsourced labels. The dataset also includes ~1.5K validation labels annotated by rover planners and scientists from NASA's MSL (Mars Science Laboratory) mission, which operates the Curiosity rover, and the MER (Mars Exploration Rovers) mission, which operated the Spirit and Opportunity rovers. We trained a DeepLabv3 model on the AI4Mars training set and achieved over 96% overall classification accuracy on the test set. The dataset is made publicly available.
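
As a concrete starting point, the sketch below shows how a DeepLabv3 segmentation model could be fine-tuned on AI4Mars-style image/mask pairs. It uses torchvision's deeplabv3_resnet50 as a stand-in for the model trained in the paper; the four-class terrain scheme, the 255 "unlabeled" pixel value, and the optimizer settings are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch: fine-tuning a DeepLabv3 model on AI4Mars-style image/mask pairs.
# The class count (4 terrain classes) and the 255 "unlabeled" value are assumptions;
# adjust them to match the released label convention.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 4      # e.g., soil, bedrock, sand, big rock (assumed)
IGNORE_INDEX = 255   # pixels left unlabeled by annotators (assumed)

model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
criterion = nn.CrossEntropyLoss(ignore_index=IGNORE_INDEX)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """One optimization step on a batch of (B, 3, H, W) images and (B, H, W) masks."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]          # (B, NUM_CLASSES, H, W)
    loss = criterion(logits, masks.long())
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the rover images and crowdsourced masks would be wrapped in a standard PyTorch Dataset/DataLoader, and the held-out expert-annotated labels would be used only for evaluation.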
