Kinect-Based Micro-Behavior Sensing System for Learning the Smart Assistance with Human Subjects Inside Their Homes

Automatic sensing and understanding of context are important for making appropriate decisions in smart systems. In particular, understanding and tracing human activities and behaviors enable a system to support or imitate them. Many studies identify activities or behaviors using various sensors, but most of them can recognize only coarse-grained activities, and some severely infringe on privacy. In this study, we propose a micro-behavior sensing system based on the Kinect sensor. To reduce the privacy invasion associated with camera devices such as the Kinect, our system identifies micro-behaviors using features extracted only from skeleton data. We deployed our system in an actual smart home at the Nara Institute of Science and Technology, Japan. We then conducted a 15-day experiment to collect actual daily activities and evaluated the identification accuracy for micro-behaviors related to cooking, such as picking up a seasoning, cutting, mixing, and washing. In this paper, five main micro-behaviors are considered, and the achieved classification accuracy is 78%.
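The pipeline described above (skeleton joints in, micro-behavior label out) could be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual method: the joint names, the two hand-motion features, and the nearest-centroid classifier are all assumptions chosen to show how privacy-preserving features can be derived from skeleton data alone, without any RGB imagery.

```python
# Hypothetical sketch of skeleton-only micro-behavior classification.
# Joint names, feature choices, and the nearest-centroid rule are
# illustrative assumptions, not the system's actual implementation.
import math


def dist(a, b):
    """Euclidean distance between two points (any dimension)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def extract_features(frames):
    """Turn a sequence of skeleton frames into a small feature vector.

    frames: list of dicts mapping joint name -> (x, y, z) coordinates,
    as might be obtained from the Kinect skeleton stream.
    """
    hand_head = [dist(f["hand_right"], f["head"]) for f in frames]
    hand_speed = [dist(frames[i]["hand_right"], frames[i - 1]["hand_right"])
                  for i in range(1, len(frames))]
    return [
        sum(hand_head) / len(hand_head),               # mean hand-to-head distance
        sum(hand_speed) / max(len(hand_speed), 1),     # mean per-frame hand speed
    ]


def classify(feature_vec, centroids):
    """Assign the label of the nearest behavior centroid.

    centroids: dict mapping behavior label -> reference feature vector,
    e.g. learned as per-class means from labeled training windows.
    """
    return min(centroids, key=lambda label: dist(feature_vec, centroids[label]))
```

A real system would use richer features (more joints, temporal statistics) and a stronger classifier, but the key point survives even in this sketch: every input is a joint coordinate, so no camera image ever needs to be stored or inspected.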
