Activity Simulation from Signals
Sensor-based human activity recognition estimates human actions from sensor data. In this paper, we propose a new paradigm that renders the human activity on a screen instead of classifying it into a set of activity labels. We built a mockup of such a simulator by combining our previous signal-to-signal translation tool [1] with a motion rendering system [2]. We identified two problems that substantially degrade the simulation quality, and in this preliminary work we propose two algorithms to improve the simulator's performance.
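The pipeline can be sketched roughly as follows. The window size, joint count, and the linear translation stub are illustrative assumptions standing in for the learned signal-translation model [1] and the motion renderer [2]; the stub maps each sensor window to a skeleton pose, and the resulting frame sequence is what a renderer would animate.

```python
import numpy as np

WINDOW = 50   # samples per sensor window (assumed)
JOINTS = 15   # joints in the rendered skeleton (assumed)

def translate_signals(window: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Stand-in for the learned translation model: maps a flattened
    sensor window (e.g. 3-axis accelerometer) to 3-D joint positions."""
    return (weights @ window.ravel()).reshape(JOINTS, 3)

def render_frames(stream: np.ndarray, weights: np.ndarray) -> list:
    """Slide over the signal stream and emit one pose per window,
    producing the frame sequence handed to the motion renderer."""
    frames = []
    for start in range(0, stream.shape[0] - WINDOW + 1, WINDOW):
        window = stream[start:start + WINDOW]
        frames.append(translate_signals(window, weights))
    return frames

# Toy usage: a random accelerometer stream and random (untrained) weights.
rng = np.random.default_rng(0)
stream = rng.standard_normal((200, 3))              # 200 samples, 3 axes
weights = rng.standard_normal((JOINTS * 3, WINDOW * 3))
poses = render_frames(stream, weights)
print(len(poses), poses[0].shape)                   # 4 frames of (15, 3) poses
```

In the real system the linear stub would be replaced by the trained cross-modal translation network, and each pose frame would be passed to the rendering backend.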
[1] Sozo Inoue et al. Activity Recognition: Translation across Sensor Modalities Using Deep Learning, 2018, UbiComp/ISWC Adjunct.
[2] Takaaki Shiratori et al. FrankMocap: Fast Monocular 3D Hand and Body Motion Capture by Regression and Integration, 2020, arXiv.
[3] Ruzena Bajcsy et al. Berkeley MHAD: A Comprehensive Multimodal Human Action Database, 2013, IEEE Workshop on Applications of Computer Vision (WACV).