Command and Control of Teams of Autonomous Systems

The command and control of teams of autonomous vehicles provides a strong model for the control of cyber-physical systems in general. Using the military definition of command and control, we can identify the requirements for the operational control of many such systems and expose some of the problems that must be resolved. Among these is the need to distinguish aberrant behaviors from optimal but quirky ones, so that the human commander can determine whether the behaviors conform to standards and align with mission goals. Similarly, the commander must be able to recognize when goals will not be met in order to reapportion the assets available to the system. Robustness in the face of a highly variable environment can be achieved through machine learning, but only in a way that keeps the learned tactics recognizable as correct. Finally, because cyber-physical systems involve decisions that must be made at great speed, we consider the use of the Rainbow framework for autonomic computing to provide rapid but robust command and control at pace.
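The Rainbow framework mentioned above performs architecture-based self-adaptation: it monitors a running system against architectural invariants and applies repair tactics when they are violated. The sketch below illustrates that monitor-analyze-plan-execute cycle in miniature; all names (`Tactic`, `adaptation_cycle`, the example properties) are illustrative assumptions, not Rainbow's actual API.

```python
# Hypothetical sketch of a Rainbow-style self-adaptation cycle:
# monitor observed system properties, detect invariant violations,
# and select an applicable repair tactic. Names are illustrative,
# not the real Rainbow API.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Tactic:
    name: str
    applies: Callable[[Dict[str, float]], bool]              # guard condition
    execute: Callable[[Dict[str, float]], Dict[str, float]]  # adaptation step


def adaptation_cycle(
    state: Dict[str, float],
    invariants: Dict[str, Callable[[Dict[str, float]], bool]],
    tactics: List[Tactic],
) -> Tuple[Dict[str, float], List[str]]:
    """One monitor-analyze-plan-execute pass over the observed state."""
    # Analyze: find violated architectural invariants.
    violations = [name for name, check in invariants.items() if not check(state)]
    applied: List[str] = []
    if not violations:
        return state, applied
    # Plan: pick the first applicable tactic (a real planner would rank them).
    for tactic in tactics:
        if tactic.applies(state):
            state = tactic.execute(state)  # Execute the repair.
            applied.append(tactic.name)
            break
    return state, applied


# Example: one vehicle's task latency violates the mission-tempo invariant,
# so the commander's reapportionment tactic adds another asset to the task.
state = {"latency": 4.0, "assets_on_task": 2.0}
invariants = {"tempo": lambda s: s["latency"] <= 2.0}
tactics = [
    Tactic(
        "reapportion_assets",
        applies=lambda s: s["assets_on_task"] < 4.0,
        execute=lambda s: {
            **s,
            "assets_on_task": s["assets_on_task"] + 1,
            "latency": s["latency"] / 2,
        },
    )
]

new_state, applied = adaptation_cycle(state, invariants, tactics)
```

Because each tactic carries an explicit guard and a named effect, the adaptations it performs remain inspectable by a human commander, which is the recognizability property the abstract argues learned tactics must also preserve.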
