Human daily activity recognition with joints plus body features representation using Kinect sensor

Human activity recognition has been studied actively for decades using sequences of 2D images and video. With the development of depth sensors, new opportunities have arisen to improve and advance this field. This study presents a depth-imaging activity recognition system that monitors and recognizes daily human activities without attaching optical markers or motion sensors. In this paper, we propose a new feature representation and extraction method based on sequences of depth silhouettes. Specifically, we first extract the depth silhouette by removing noisy background effects, and then extract joints-plus-body features: skin color detection derived from the joint information and multi-view body shape (i.e., front and side views) derived from the depth silhouettes. The joint and body-shape features are combined into a single feature vector. These features have two desirable properties: they are invariant to body shape and size, and they are insensitive to small amounts of noise. A Self-Organized Map (SOM) is then used to train on and test the feature vectors. Experimental results on our proposed human activity dataset and a publicly available dataset demonstrate that our feature extraction method is promising and outperforms state-of-the-art feature extraction methods.
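The pipeline outlined above (depth silhouette extraction, joints-plus-body-shape feature vectors, SOM-based recognition) can be illustrated with a brief sketch. The snippet below is not the authors' implementation; it is a minimal, self-contained NumPy sketch of how a Self-Organized Map might be trained on such feature vectors and then used to assign activity labels. All class names, dimensions, and parameters are illustrative assumptions, and the random stand-in features stand in for the actual joint and multi-view body-shape descriptors.

```python
import numpy as np

# Minimal Self-Organized Map (SOM) sketch for activity feature vectors.
# Assumed setup: each row of X is a joints-plus-body-shape feature vector
# extracted from a depth silhouette sequence; y holds activity labels.

class SimpleSOM:
    def __init__(self, rows, cols, dim, lr=0.5, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.rows, self.cols = rows, cols
        self.w = rng.random((rows, cols, dim))      # codebook (node weight) vectors
        self.lr, self.sigma = lr, sigma
        # Grid coordinates of the nodes, used for neighborhood updates.
        self.grid = np.stack(np.meshgrid(np.arange(rows),
                                         np.arange(cols),
                                         indexing="ij"), axis=-1)

    def bmu(self, x):
        """Return (row, col) of the best-matching unit for vector x."""
        d = np.linalg.norm(self.w - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, X, epochs=20):
        for t in range(epochs):
            lr = self.lr * (1 - t / epochs)                     # decaying learning rate
            sigma = max(self.sigma * (1 - t / epochs), 0.5)     # shrinking neighborhood
            for x in X:
                b = np.array(self.bmu(x))
                # Gaussian neighborhood around the BMU on the 2-D node grid.
                dist2 = np.sum((self.grid - b) ** 2, axis=-1)
                h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
                self.w += lr * h * (x - self.w)

    def label_map(self, X, y):
        """Assign each SOM node the majority activity label of its hits."""
        votes = {}
        for x, lab in zip(X, y):
            votes.setdefault(self.bmu(x), []).append(lab)
        return {node: max(set(labs), key=labs.count) for node, labs in votes.items()}

# Illustrative usage with random stand-in features (real features would come
# from the depth-silhouette and joint extraction stages described above).
X_train = np.random.rand(200, 60)        # 60-D feature vectors (assumed size)
y_train = np.random.randint(0, 5, 200)   # 5 daily activities (assumed)
som = SimpleSOM(8, 8, X_train.shape[1])
som.train(X_train)
node_labels = som.label_map(X_train, y_train)
predicted_activity = node_labels.get(som.bmu(X_train[0]))
```

In practice, the map size, learning rate, and neighborhood width would be tuned on the training set, and recognition of a new silhouette sequence would amount to extracting its feature vector, finding its best-matching node, and reading off that node's activity label.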
