Body Movement Analysis and Recognition

This chapter addresses a nonverbal mode of communication for human–robot interaction based on understanding human upper-body gestures. A human–robot interaction system built on a novel combination of sensors is proposed. It allows a person to interact with a humanoid social robot through natural body language, and the robot can understand the meaning of human upper-body gestures and express itself through a combination of body movements, facial expressions, and verbal language. A set of 12 upper-body gestures, including human–object interactions, is used for communication. The gestures are characterized by head, arm, and hand posture information: a CyberGlove II captures hand posture, and this is combined with head and arm posture information captured by a Microsoft Kinect, forming a new sensor solution for human gesture capture. Based on the body posture data, an effective, real-time human gesture recognition method is proposed. For the experiments, a human body gesture dataset was built, and the results demonstrate the effectiveness and efficiency of the proposed approach.
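To make the sensor-fusion idea concrete, the sketch below shows one plausible way to combine hypothetical data-glove joint angles with depth-sensor skeleton joints into a single feature vector and classify it with a simple nearest-neighbour rule. The feature dimensions, the normalisation steps, and the classifier are illustrative assumptions for this example only and are not taken from the chapter's actual pipeline.

```python
import numpy as np

# Minimal sketch: fuse data-glove and depth-sensor posture features and
# classify gestures with a 1-nearest-neighbour rule. All sizes and steps
# below are illustrative assumptions, not the chapter's exact method.

GLOVE_DIM = 22            # hypothetical: joint angles from a data glove
UPPER_BODY_JOINTS = 7     # hypothetical: head, shoulders, elbows, wrists


def build_feature(glove_angles, joint_positions):
    """Concatenate normalised hand and upper-body posture into one vector.

    glove_angles    : (GLOVE_DIM,) joint angles, assumed in degrees.
    joint_positions : (UPPER_BODY_JOINTS, 3) 3-D joint positions in metres.
    """
    # Remove global translation by centring the skeleton on its mean joint,
    # then scale to unit norm so overall body size does not dominate.
    joints = joint_positions - joint_positions.mean(axis=0)
    joints = joints / (np.linalg.norm(joints) + 1e-8)

    # Scale glove angles to a comparable range (assume 0-180 degrees).
    glove = np.asarray(glove_angles, dtype=float) / 180.0

    return np.concatenate([glove, joints.ravel()])


class NearestNeighbourGestureClassifier:
    """1-NN classifier over fused posture feature vectors."""

    def fit(self, features, labels):
        self.features = np.vstack(features)
        self.labels = list(labels)
        return self

    def predict(self, feature):
        # Euclidean distance to every training sample; return nearest label.
        dists = np.linalg.norm(self.features - feature, axis=1)
        return self.labels[int(np.argmin(dists))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy gesture classes with synthetic training samples.
    train, labels = [], []
    for label, offset in [("wave", 0.0), ("point", 1.0)]:
        for _ in range(5):
            glove = rng.uniform(0, 180, GLOVE_DIM) * 0.5 + offset * 60
            joints = rng.normal(offset, 0.1, (UPPER_BODY_JOINTS, 3))
            train.append(build_feature(glove, joints))
            labels.append(label)

    clf = NearestNeighbourGestureClassifier().fit(train, labels)
    query = build_feature(rng.uniform(60, 180, GLOVE_DIM),
                          rng.normal(1.0, 0.1, (UPPER_BODY_JOINTS, 3)))
    print(clf.predict(query))
```

Centring the skeleton on its mean joint before scaling removes the person's global position in the room, so the classifier responds to the posture itself rather than to where the person happens to stand.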
