Robust Science-Optimal Spacecraft Control for Circular Orbit Missions

This paper describes a Markov decision process (MDP) approach to a robust spacecraft mission control policy that maximizes the expected science reward, assuming a circular orbit. The control policy that governs mission steps can be computed off-board or onboard, depending on the availability of communication bandwidth and onboard computational resources. The paper considers a sample science mission in which the spacecraft collects data from celestial objects that are viewable only within a certain true anomaly window of the orbit. Science data collection requires the spacecraft to slew its instrument(s) toward each target and to keep pointing at the target as the spacecraft traverses its orbit. Robustness and stochastic optimization of science reward are achieved at the cost of computational complexity. Approximate dynamic programming (ADP) is employed to reduce computational time and effort to manageable levels and to handle larger problem sizes. The proposed ADP algorithm partitions the state space into true anomaly regions, enabling adjacent science targets to be grouped. Results of a simulation case study demonstrate that the proposed ADP approach performs well over reasonable ranges of key problem parameters.
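
To make the setting concrete, the sketch below illustrates the kind of MDP the abstract describes: the circular orbit is discretized into true anomaly bins, each science target is observable only inside a window of bins, commanded slews succeed with some probability, and value iteration produces a pointing policy that maximizes expected discounted science reward. The bin count, visibility windows, slew-success probability, and reward structure are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical problem data (assumed for illustration only).
N_BINS = 36                                # 10-degree true-anomaly bins
TARGETS = {0: range(3, 9),                 # target id -> visible bins (assumed)
           1: range(15, 22),
           2: range(27, 33)}
ACTIONS = list(TARGETS) + [None]           # slew to a target, or idle
GAMMA = 0.95                               # discount factor (assumed)
P_SLEW_OK = 0.9                            # probability a commanded slew succeeds

def reward(bin_idx, pointed):
    """Unit science reward when pointing at a target that is currently visible."""
    return 1.0 if pointed is not None and bin_idx in TARGETS[pointed] else 0.0

# State = (true-anomaly bin, currently pointed target); the spacecraft advances
# exactly one bin per decision step (uniform anomaly rate on a circular orbit).
V = np.zeros((N_BINS, len(ACTIONS)))
for _ in range(300):                       # value-iteration sweeps
    V_new = np.zeros_like(V)
    for b in range(N_BINS):
        nb = (b + 1) % N_BINS              # next true-anomaly bin
        for p_idx, pointed in enumerate(ACTIONS):
            q_best = -np.inf
            for a_idx, target in enumerate(ACTIONS):
                # Slew succeeds with prob P_SLEW_OK; otherwise pointing is unchanged.
                q = (P_SLEW_OK * (reward(b, target) + GAMMA * V[nb, a_idx]) +
                     (1 - P_SLEW_OK) * (reward(b, pointed) + GAMMA * V[nb, p_idx]))
                q_best = max(q_best, q)
            V_new[b, p_idx] = q_best
    V = V_new
```

An ADP scheme in the spirit of the abstract would avoid sweeping this full state space: the true anomaly bins would be partitioned into regions that group adjacent targets, and the value function would be computed or approximated region by region rather than globally.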
