Amplifying human ability through autonomics and machine learning in IMPACT

Learned models of human and system performance can help amplify human ability to control complex environments that feature autonomous units. In developing a command-and-control system that lets a small number of people direct a large number of autonomous teams, we employ an autonomics framework to manage both the networks that represent mission plans and the networks composed of human controllers and their autonomous assistants. Machine learning lets us build models of human and system performance that are useful for monitoring plans and for managing human attention and task loads. It also aids the development of tactics that human supervisors can successfully monitor through the command-and-control system.
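The abstract does not specify how the performance models are learned, but one simple family of techniques suited to this setting is instance-based classification: predict an operator's state from historical observations of similar situations. The sketch below is purely illustrative and assumes hypothetical features (count of active tasks, alert rate) and labels; it is not the paper's actual model.

```python
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training examples, using Euclidean distance on the features."""
    neighbors = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = [label for _, label in neighbors]
    return max(set(votes), key=votes.count)

# Hypothetical operator-state history: (active tasks, alerts per minute)
# labeled by whether the operator was overloaded at the time.
history = [
    ((2, 1), "ok"), ((3, 2), "ok"), ((4, 1), "ok"),
    ((8, 6), "overloaded"), ((9, 5), "overloaded"), ((7, 7), "overloaded"),
]

print(knn_predict(history, (3, 1)))  # light current load
print(knn_predict(history, (8, 6)))  # heavy current load
```

A monitoring loop could consult such a model before routing a new task or alert to an operator, deferring or reassigning work predicted to overload them.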
