Robotic Hardware and Software Integration for Changing Human Intentions

Estimating and reshaping human intentions are among the most significant research topics in human-robot interaction. This chapter provides an overview of the intention estimation literature in human-robot interaction and introduces an approach by which robots can deliberately reshape estimated intentions. Reshaping is achieved by having the robots move in directions observed a priori from the interactions of humans with the objects in the scene. In one of the few studies on intention reshaping, the authors exploit spatial information by learning Hidden Markov Models (HMMs) of motion tailored for intelligent robotic interaction. The algorithmic design consists of two phases. First, the approach detects and tracks the human to estimate the current intention. This estimate is then used by autonomous robots that interact with the detected human to change that intention. In the tracking and intention estimation phase, the human's postures and locations are monitored using low-level video processing methods. In the reshaping phase, the learned HMMs are used to reshape the estimated human intention. The two-phase system is tested on video frames taken from a real human-robot environment, and the results obtained with the proposed approach show promising performance in reshaping detected intentions.

DOI: 10.4018/978-1-4666-0176-5.ch013
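The chapter describes the estimation phase only in prose; the following is a minimal sketch of how per-intention HMMs over tracked 2D positions might be trained and queried. The library (hmmlearn), the intention labels, and the (x, y) feature choice are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency; the chapter names no library


def train_intention_models(trajectories_by_intention, n_states=4):
    """Fit one Gaussian HMM per candidate intention from observed
    (x, y) trajectories of the tracked human."""
    models = {}
    for intention, trajectories in trajectories_by_intention.items():
        X = np.vstack(trajectories)               # stack the T_i x 2 arrays
        lengths = [len(t) for t in trajectories]  # per-sequence boundaries
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[intention] = m
    return models


def estimate_intention(models, trajectory):
    """Return the intention whose HMM assigns the highest
    log-likelihood to the current tracked trajectory."""
    return max(models, key=lambda k: models[k].score(trajectory))
```

Under this sketch, the reshaping phase would re-run estimate_intention as the robots move along the a priori observed directions, declaring success once the maximum-likelihood intention switches to the desired one.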
