Unmanned Vehicle Operations: Countering Imperfect Information in an Adversarial Environment

Command and Control (C2) decisions must, of necessity, be based on imperfect knowledge of the battlespace. With the advent of unmanned air vehicles (UAVs), the pace at which decision points arise will increase, necessitating the development of automated C2 tools to support the decision-making process. Estimation and control in the presence of purely random input noise is well understood and produces excellent results. In the context of C2 decisions, however, one must consider observations contaminated by both random noise and adversarially induced "noise". Consequently, zero-sum, discrete, stochastic games under imperfect observations are considered here. Machinery has recently been developed that allows one to solve such problems; the theory is summarized. For problems in the class considered here, the resulting algorithms are computationally feasible. The method is applied to a small game testbed, and the behavior of the resulting controls is discussed. An alternative (naive) approach is to apply the optimal state-feedback game controls at the maximum-likelihood state; this approach is susceptible to deception by the opponent. It is shown that the improvement obtained with the robust approach ranges from small to tremendous, depending on certain factors.
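
To make the contrast between the robust (information-state) control and the naive maximum-likelihood feedback concrete, the sketch below works through a toy one-step, two-state, zero-sum game. The states, costs, and observation channel are illustrative assumptions only; they are not the model, testbed, or algorithm from the paper.

```python
import numpy as np

# Illustrative one-step, two-state, zero-sum game (not the paper's model).
# The opponent occupies hidden state 0 or 1; we choose action 0 or 1.
# cost[x, u] is the cost we pay when the true state is x and we play u;
# the opponent wants to maximize our cost.
cost = np.array([[0.0, 1.0],
                 [5.0, 0.0]])   # guessing wrong against state 1 is much more costly

prior = np.array([0.5, 0.5])    # prior belief over the hidden state

# Observation model P(y | x); the adversary can bias it toward misleading
# observations (a crude stand-in for adversarially induced "noise").
P_obs = np.array([[0.8, 0.2],   # y given x = 0
                  [0.4, 0.6]])  # y given x = 1 (deliberately noisy)

def belief_update(prior, y):
    """Bayes update of the belief (information state) after observing y."""
    unnorm = prior * P_obs[:, y]
    return unnorm / unnorm.sum()

def robust_control(belief):
    """Choose the action minimizing expected cost against the full belief."""
    expected = belief @ cost          # expected cost of each action
    return int(np.argmin(expected))

def naive_control(belief):
    """Certainty-equivalent: act as if the maximum-likelihood state were true."""
    x_ml = int(np.argmax(belief))
    return int(np.argmin(cost[x_ml]))

# Compare the two controllers when the true state is 1 but the adversary
# arranges for the misleading observation y = 0 to be seen.
true_state, y = 1, 0
b = belief_update(prior, y)
for name, ctrl in [("robust", robust_control), ("naive", naive_control)]:
    u = ctrl(b)
    print(f"{name:6s}: action {u}, realized cost {cost[true_state, u]}")
```

With these illustrative numbers, the belief-based controller hedges against the expensive mistake at state 1, while the certainty-equivalent controller is led into it by the misleading observation, which is the kind of deception the naive approach is vulnerable to.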
