AutoBAP: Automatic Coding of Body Action and Posture Units from Wearable Sensors

Manual annotation of human body movement is an integral part of research on non-verbal communication and computational behaviour analysis, but it is also a time-consuming and tedious task. In this paper we present AutoBAP, a system that automates the coding of bodily expressions according to the Body Action and Posture (BAP) coding scheme. Our system takes continuous body motion and gaze behaviour data as its input. The data are recorded using a full-body motion tracking suit and a wearable eye tracker. From these data our system automatically generates a labelled XML file that can be visualised and edited with off-the-shelf video annotation tools. We evaluate our system in a laboratory-based user study with six participants performing scripted sequences of 184 actions. Results from the user study show that our prototype system is able to annotate 172 out of the 274 labels of the full BAP coding scheme with good agreement with a manual annotator (Cohen's kappa > 0.6).
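The agreement figure above uses Cohen's kappa, which corrects raw per-frame agreement between the automatic and manual annotations for agreement expected by chance. As a reminder of how that statistic is computed (this sketch is illustrative and not part of the AutoBAP system; the function name and label values are hypothetical), kappa for two equal-length label sequences is (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is chance agreement from the raters' marginal label frequencies:

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length sequences of nominal labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of positions where the labels match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label probabilities.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)


# Hypothetical example: 3 of 4 frames agree, giving kappa = 0.5 here.
print(cohens_kappa(["arm", "arm", "head", "head"],
                   ["arm", "arm", "head", "arm"]))
```

A kappa above 0.6 is conventionally read as "substantial" agreement, which is the threshold the evaluation uses.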
