Reshaping human intention in Human-Robot Interactions by robot moves

Abstract This paper presents the methodology and experiments for reshaping human intentions through robot movements in Human-Robot Interactions (HRIs). Although the estimation of human intentions is well studied in the literature, reshaping intentions through robot-initiated interactions is a significant new branch of HRI. In this paper, we analyze how estimated human intentions can be deliberately changed through cooperation with mobile robots in real human-robot environments. We propose an intention-reshaping system that uses either Observable Operator Models (OOMs) or Hidden Markov Models (HMMs) to estimate human intention and to decide which moves a robot should perform to reshape the previously estimated intention into a desired one. At the low level, the system tracks the locations of all mobile agents using cameras. We test the system on videos recorded in a real HRI environment developed as our experimental setup. The results show that OOMs are faster than HMMs and that both models yield correct decisions for the test sequences.
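
To make the two sequence models concrete, the sketch below shows how likelihood-based intention estimation of the kind the abstract describes typically works: an observation sequence (here, arbitrary integer symbols standing in for quantized tracker output) is scored under one HMM per candidate intention via the forward algorithm, and the same kind of sequence probability is computed under an OOM as a product of observable operators. This is a minimal illustration, not the paper's implementation: the intention labels ("approach"/"avoid"), the observation alphabet, and every parameter matrix are hypothetical placeholders rather than trained models.

```python
import numpy as np

# Illustrative sketch only: all model parameters below are hypothetical
# placeholders, not the trained models from the paper.

def hmm_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the forward algorithm with per-step scaling for stability.

    pi : (N,)   initial state distribution
    A  : (N, N) transition matrix, A[i, j] = P(s_j | s_i)
    B  : (N, M) emission matrix,   B[i, k] = P(o_k | s_i)
    """
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    log_p = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # forward recursion
        c = alpha.sum()
        log_p += np.log(c)
        alpha /= c
    return log_p

def oom_probability(obs, w0, taus, sigma):
    """Sequence probability under an Observable Operator Model:
    P(o_1 ... o_n) = sigma . tau_{o_n} ... tau_{o_1} . w0."""
    w = w0
    for o in obs:
        w = taus[o] @ w                 # apply the observable operator
    return float(sigma @ w)

# Hypothetical observation sequence from the camera-based tracker.
obs = [0, 1, 1, 2, 1]

# One HMM per candidate intention; the estimate is the best-scoring model.
models = {
    "approach": dict(pi=np.array([0.6, 0.4]),
                     A=np.array([[0.7, 0.3], [0.4, 0.6]]),
                     B=np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])),
    "avoid":    dict(pi=np.array([0.5, 0.5]),
                     A=np.array([[0.6, 0.4], [0.2, 0.8]]),
                     B=np.array([[0.2, 0.3, 0.5], [0.6, 0.3, 0.1]])),
}
estimate = max(models, key=lambda k: hmm_log_likelihood(obs, **models[k]))
print("estimated intention:", estimate)

# A matching toy OOM (valid: the operators sum to a column-stochastic
# matrix and sigma . w0 = 1); here it encodes a uniform i.i.d. process.
sigma = np.array([1.0, 1.0])
w0 = np.array([0.5, 0.5])
taus = {o: np.array([[0.7, 0.4], [0.3, 0.6]]) / 3 for o in (0, 1, 2)}
print("OOM sequence probability:", oom_probability(obs, w0, taus, sigma))
```

Both evaluations run in time linear in the sequence length; one plausible reason for the speed advantage the abstract reports for OOMs is that the OOM scoring is a bare chain of matrix-vector products, without the per-step emission lookups and normalization bookkeeping of the HMM forward pass.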
