A Sensory-Motor Language for Human Activity Understanding

We have empirically discovered that the space of human actions has a linguistic framework. This is a sensory-motor space consisting of the evolution of the joint angles of the human body in movement. The space of human activity has its own phonemes, morphemes, and sentences. We present a Human Activity Language (HAL) for the symbolic, non-arbitrary representation of visual and motor information. In phonology, we define atomic segments (kinetemes) that are used to compose human activity. We introduce the concept of a kinetological system and propose five basic properties for such a system: compactness, view-invariance, reproducibility, selectivity, and reconstructivity. In morphology, we extend sequential language learning to incorporate associative learning with our parallel learning approach. Parallel learning is effective in identifying the kinetemes and the active joints in a particular action. In syntax, we suggest four lexical categories for our human activity language (noun, verb, adjective, and adverb). These categories are combined into sentences through a syntax for human movement.
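The abstract does not spell out the formal definition of a kineteme, so the following is only a minimal sketch of what phonological segmentation of a joint-angle trajectory could look like. It assumes, purely for illustration, that a kineteme is a maximal interval over which the signs of a joint angle's velocity and acceleration remain constant; the function name segment_kinetemes and this segmentation criterion are assumptions, not the paper's actual method.

```python
import numpy as np

def segment_kinetemes(theta, dt=1.0):
    """Segment a 1-D joint-angle trajectory into atomic segments.

    Hypothetical illustration: a 'kineteme' here is a maximal run of
    samples over which the signs of angular velocity and acceleration
    are both constant. This is an assumed stand-in for the paper's
    kinetological segmentation, not its exact definition.
    """
    velocity = np.gradient(theta, dt)         # first derivative of the angle
    acceleration = np.gradient(velocity, dt)  # second derivative of the angle
    # Encode each sample by its (sign(velocity), sign(acceleration)) pair.
    states = list(zip(np.sign(velocity), np.sign(acceleration)))
    segments = []
    start = 0
    for i in range(1, len(states)):
        if states[i] != states[start]:
            # The sign pair changed: close the current segment here.
            segments.append((start, i, states[start]))
            start = i
    segments.append((start, len(states), states[start]))
    return segments

# Usage: one period of a sinusoidal joint swing yields roughly four
# segments, one per combination of velocity/acceleration sign.
t = np.linspace(0.0, 2.0 * np.pi, 200)
for start, end, (sv, sa) in segment_kinetemes(np.sin(t), dt=t[1] - t[0]):
    print(f"samples [{start}, {end}) sign(v)={sv:+.0f} sign(a)={sa:+.0f}")
```

Under this assumed criterion, each segment carries a discrete label (the sign pair), which is the kind of symbolic, non-arbitrary unit the abstract describes composing into morphemes and sentences.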
