Motion regeneration using motion texture and autoencoder
Motion analysis and recognition frequently suffer from noisy motion capture data, not only because of systematic noise in imaging devices but also because of motion-dependent non-systematic errors such as self-occlusion and failures in extracting motion dynamics from visual data. In this work, we propose a motion regeneration method that extracts only the statistically significant and distinct characteristics of human body motion and synthesizes new motion data. To this end, we convert 3D human body motion into a 2D motion texture that is directly applicable to well-trained deep convolutional networks. An autoencoder is trained on our 2D motion textures to learn only the essential characteristics of human body motion in the encoded space, discarding systematic noise and unexpected non-systematic errors that have nothing to do with the description of a particular motion. To verify the effectiveness of our regenerated motion, we perform a motion classification test on a public body motion dataset using our Long Short-Term Memory (LSTM)-based method.
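The abstract does not spell out how a 3D skeleton sequence becomes a 2D motion texture. A minimal sketch of one plausible construction, assuming a common convention (one row per joint coordinate, one column per frame, min-max normalized to an 8-bit image; the function name, the per-channel normalization, and the 25-joint skeleton size are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np

def motion_to_texture(joints):
    """Convert a 3D joint sequence into a 2D 'motion texture' image.

    joints: array of shape (T, J, 3) -- T frames, J joints, (x, y, z).
    Returns a uint8 image of shape (J*3, T): each row is one joint
    coordinate channel over time, min-max scaled to [0, 255] so the
    sequence can be fed to an image-based convolutional autoencoder.
    (Layout and normalization are assumptions for illustration.)
    """
    T, J, C = joints.shape
    flat = joints.reshape(T, J * C).T            # (J*3, T)
    lo = flat.min(axis=1, keepdims=True)         # per-channel minimum
    hi = flat.max(axis=1, keepdims=True)         # per-channel maximum
    tex = (flat - lo) / np.maximum(hi - lo, 1e-8) * 255.0
    return tex.astype(np.uint8)

# Example: a 60-frame sequence with a 25-joint skeleton
seq = np.random.rand(60, 25, 3)
tex = motion_to_texture(seq)
print(tex.shape)  # (75, 60)
```

A texture in this layout has a fixed height (joint channels) and a width proportional to sequence length, which is what makes it amenable to standard 2D convolutions in the autoencoder.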