Cooperation in Human-Agent Systems to Support Resilience

Objective: This study uses a dyadic approach to understand human-agent cooperation and system resilience.

Background: Increasingly capable technology fundamentally changes human-machine relationships. Rather than reliance on, or compliance with, more or less reliable automation, we investigate interaction strategies with more or less cooperative agents.

Method: A joint-task microworld scenario was developed to explore the effects of agent cooperation on participant cooperation and system resilience. In Experiment 1, to assess the effects of agent cooperation on participant cooperation, 36 people coordinated with a more or less cooperative agent by requesting resources and responding to the agent's requests for resources in a dynamic task environment. In Experiment 2, another 36 people performed the task to assess these effects following a fast-tempo perturbation in their own simulated hospital.

Results: Experiment 1 shows that people reciprocated the cooperative behaviors of the agents: a low-cooperation agent led to less effective interactions and less resource sharing, whereas a high-cooperation agent led to more effective interactions and greater resource sharing. Experiment 2 shows that an initial fast-tempo perturbation undermined proactive cooperation: people tended not to request resources. The initial fast tempo had little effect on reactive cooperation: people continued to accept resource requests according to the agent's cooperation level.

Conclusion: This study complements the supervisory control perspective on human-automation interaction by considering interdependence and cooperation rather than the more common focus on reliability and reliance.

Application: The cooperativeness of automated agents can influence the cooperativeness of the people who work with them. Design and evaluation for resilience in teams that include increasingly autonomous agents should consider the cooperative behaviors of those agents.
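The abstract describes the manipulation only at a high level. As a loose illustration of the reciprocity mechanism it reports, the sketch below (hypothetical; the function, names, and parameters are assumptions, not details of the study's microworld) models an agent whose cooperation level sets the probability of granting resource requests, paired with a participant model whose proactive cooperation (requesting) and reactive cooperation (granting) both track the agent's observed behavior.

import random


def simulate_dyad(agent_coop: float, rounds: int = 500, seed: int = 42):
    """Illustrative dyadic resource-sharing sketch (not from the study).

    agent_coop: probability the agent grants a participant request,
    standing in for the 'cooperation level' manipulation. The participant
    model reciprocates: its willingness to request resources (proactive
    cooperation) and to grant the agent's requests (reactive cooperation)
    both follow a running estimate of the agent's cooperativeness.
    """
    rng = random.Random(seed)
    estimate = 0.5          # participant's running estimate of agent cooperativeness
    grants_to_agent = 0     # reactive cooperation tally
    grants_from_agent = 0   # resources the participant actually received

    for _ in range(rounds):
        # Proactive cooperation: request a resource with probability equal
        # to how cooperative the agent has seemed so far.
        if rng.random() < estimate:
            granted = rng.random() < agent_coop
            grants_from_agent += granted
            # Nudge the estimate toward the observed outcome.
            estimate += 0.1 * (granted - estimate)

        # Reactive cooperation: the agent requests a resource half the time;
        # the participant grants it in proportion to the same estimate.
        if rng.random() < 0.5 and rng.random() < estimate:
            grants_to_agent += 1

    return {"final_estimate": round(estimate, 2),
            "received": grants_from_agent,
            "shared": grants_to_agent}


if __name__ == "__main__":
    for level in (0.2, 0.8):  # low- vs. high-cooperation agent
        print(f"agent_coop={level}: {simulate_dyad(level)}")

Under these assumptions, a low-cooperation agent drags the participant's estimate, and with it both forms of cooperation, downward, while a high-cooperation agent does the opposite, mirroring the reciprocity pattern reported in Experiment 1.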
