AAS 18-363

DEEP LEARNING FOR AUTONOMOUS LUNAR LANDING

Over the past few years, encouraged by advancements in parallel computing technologies (e.g., Graphics Processing Units, GPUs), the availability of massive labeled datasets, and breakthroughs in the understanding of deep neural networks, there has been an explosion of machine learning algorithms that can accurately process images for classification and regression tasks. It is expected that deep learning methods will play a critical role in autonomous and intelligent space guidance problems. The goal of this paper is to design a set of deep neural networks, i.e., Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), able to predict the fuel-optimal control actions for autonomous Moon landing using only raw images taken by on-board optical cameras. Such an approach can be employed to select actions directly, without the need for state-estimation filters; indeed, the optimal guidance is determined by processing the images only. For this purpose, supervised machine learning algorithms are designed and tested. In this framework, deep networks are trained with many example inputs and their desired outputs (labels), provided by a supervisor. During the training phase, the goal is to model the unknown functional relationship that links the given inputs to the given outputs. Inputs and labels come from a properly generated dataset: the images associated with each state are the inputs, and the fuel-optimal control actions are the labels. Here we consider two possible scenarios, i.e., 1) a vertical 1-D Moon landing and 2) a planar 2-D Moon landing. For both cases, fuel-optimal trajectories are generated by software packages such as the General Pseudospectral Optimal Control Software (GPOPS) for a set of initial conditions. A training phase is performed with this dataset; subsequently, in order to improve the network accuracy, a Dataset Aggregation (DAgger) approach is applied. Performance is verified on optimal test trajectories never seen by the networks.
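As a rough illustration of the supervised setup described above (not the paper's actual architecture or hyperparameters), the following Python/PyTorch sketch trains a small CNN to regress a thrust command from a single grayscale camera frame. The network size, image resolution, and the randomly generated tensors are illustrative assumptions; in the paper the training pairs would instead come from images rendered along GPOPS-generated fuel-optimal trajectories and their associated optimal control actions.

    # Minimal sketch of image-to-control supervised learning (assumed PyTorch setup).
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    class LandingCNN(nn.Module):
        """CNN mapping one grayscale camera frame to a regressed thrust command."""
        def __init__(self, n_actions: int = 1):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
                nn.Linear(64, n_actions),  # fuel-optimal action label is the target
            )

        def forward(self, x):
            return self.head(self.features(x))

    # Placeholder data: random 64x64 "camera" frames with scalar thrust labels.
    # These stand in for the dataset generated from fuel-optimal trajectories.
    images = torch.rand(256, 1, 64, 64)
    labels = torch.rand(256, 1)
    loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

    model = LandingCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(10):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)   # regression loss against optimal actions
            loss.backward()
            optimizer.step()

A DAgger-style refinement, as mentioned in the abstract, would then roll the trained network out on the simulated landing dynamics, relabel the states it actually visits with the optimal controller, append those new image-action pairs to the dataset, and retrain.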

[1] Alexander May et al., "Lessons learned from OSIRIS-REx autonomous navigation using natural feature tracking," 2017 IEEE Aerospace Conference, 2017.

[2] Allan R. Klumpp et al., "Apollo lunar descent guidance," Automatica, 1974.

[3] Yanning Guo et al., "Applications of Generalized Zero-Effort-Miss/Zero-Effort-Velocity Feedback Guidance Algorithm," 2013.

[4] Roberto Furfaro et al., "Terminal Guidance for Lunar Landing and Retargeting Using a Hybrid Control Strategy," 2016.

[5] Dario Izzo et al., "Real-time optimal control via Deep Neural Networks: study on landing problems," arXiv, 2016.

[6] Anil V. Rao et al., "GPOPS-II," ACM Transactions on Mathematical Software, 2014.

[7] Roberto Furfaro et al., "Optimal sliding guidance algorithm for Mars powered descent phase," 2016.

[8] Allan R. Klumpp, "A manually retargeted automatic landing system for lunar module (LM)," 2003.

[9] Geoffrey J. Gordon et al., "A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning," AISTATS, 2010.

[10] Ping Lu et al., "Survey of convex optimization for aerospace applications," 2017.

[11] Tara N. Sainath et al., "Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups," IEEE Signal Processing Magazine, 2012.

[12] Roberto Furfaro et al., "Adaptive pinpoint and fuel efficient mars landing using reinforcement learning," IEEE/CAA Journal of Automatica Sinica, 2012.

[13] Daniel J. Scheeres et al., "Characterizing and navigating small bodies with imaging data," 2006.

[14] Jules Simo et al., "Neural-based trajectory shaping approach for terminal planetary pinpoint guidance," 2013.

[15] K. Mikael et al., "Deep Learning for NLP," 2013.