Velocity-Based Multiple Change-Point Inference for Unsupervised Segmentation of Human Movement Behavior

To transfer complex human behavior to a robot, segmentation methods are needed that can detect central movement patterns which can be combined to generate a wide range of behaviors. We propose velocity-based Multiple Change-point Inference (vMCI), an algorithm that segments human movements into behavioral building blocks in a fully automatic way. Based on the characteristic bell-shaped velocity patterns found in point-to-point arm movements, the algorithm infers segment borders using Bayesian inference. It handles varying segment lengths and variations in movement execution, and the number of segments a movement is composed of need not be known in advance. Several experiments on synthetic data and motion capture data of human movements compare vMCI with other techniques for unsupervised segmentation. The results show that vMCI detects segment borders even in noisy data and in demonstrations with smooth transitions between segments.
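To illustrate the velocity cue the abstract refers to, the sketch below is a deliberately simplified heuristic, not the Bayesian vMCI inference itself: point-to-point arm movements show bell-shaped speed profiles, so the speed minima between consecutive bells are natural candidates for segment borders. The minimum-jerk profile, the 100 Hz sampling, and the threshold value are illustrative assumptions, not taken from the paper.

```python
import math

def bell_velocity(t):
    # Idealized bell-shaped speed profile on [0, 1] (minimum-jerk-like),
    # zero at both endpoints, single peak at t = 0.5.
    return 30 * t**2 * (1 - t)**2

# Two consecutive point-to-point segments, sampled at an assumed 100 Hz.
dt = 0.01
speeds = [bell_velocity(i * dt) for i in range(101)] + \
         [bell_velocity(i * dt) for i in range(1, 101)]

def candidate_borders(v, threshold):
    # Local speed minima below a threshold: the dips between two
    # bell-shaped segments are candidate change points.
    return [i for i in range(1, len(v) - 1)
            if v[i] <= v[i - 1] and v[i] <= v[i + 1] and v[i] < threshold]

borders = candidate_borders(speeds, threshold=0.5)
print(borders)  # the dip between the two bells, at sample index 100
```

In this synthetic trace the only sub-threshold minimum sits exactly between the two bells, so the heuristic recovers the true border. The actual vMCI algorithm replaces this thresholding with Bayesian multiple change-point inference over velocity models, which is what lets it cope with noise and smooth transitions.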
