Competitive Physical Human-Robot Game Play

While competitive games have been studied extensively in the AI community for benchmarking purposes, there has been only limited discussion of human interaction with embodied agents in competitive settings. In this work, we aim to motivate research in competitive human-robot interaction (competitive-HRI) by discussing how human users can benefit from robot competitors. We then examine concepts from game AI that can be adopted for competitive-HRI. Based on these discussions, we propose a robotic system designed to support future competitive-HRI research. We also propose a human-robot fencing game to evaluate a robot's capabilities in competitive-HRI scenarios. Finally, we present initial experimental results and discuss possible future research directions.
