NuActiv: recognizing unseen new activities using semantic attribute-based learning

We study the problem of recognizing a new human activity when no training examples of that activity have ever been seen. Activity recognition is an essential component of user-centric and context-aware applications, and previous studies have shown promising results using a variety of machine learning algorithms. However, most existing methods can only recognize activities that appeared in the training data: a previously unseen activity class cannot be recognized without training samples. Even when all activities can be enumerated in advance, labeled samples are often time-consuming and expensive to obtain, since they require substantial effort from human annotators or domain experts. In this paper, we present NuActiv, an activity recognition system that can recognize a human activity even when there are no training data for that activity class. First, we design a new representation of activities based on semantic attributes, where each attribute is a human-readable term that describes a basic element or inherent characteristic of an activity. Second, building on this representation, we develop a two-layer zero-shot learning algorithm for activity recognition. Finally, to improve recognition accuracy with minimal user feedback, we develop an active learning algorithm for activity recognition. Our approach is evaluated on two datasets: a 10-exercise-activity dataset we collected and a public dataset of 34 daily-life activities. Experimental results show that, using semantic attribute-based learning, NuActiv can generalize knowledge to recognize unseen new activities, achieving up to 79% accuracy in unseen-activity recognition.
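The two-layer idea above can be sketched as follows: a first layer of attribute detectors scores each semantic attribute from sensor features, and a second layer maps the resulting attribute vector to the class whose attribute signature is closest. The class names, attribute names, and signatures below are purely illustrative assumptions, not the paper's actual attribute set, and the nearest-signature rule is a minimal stand-in for the full algorithm.

```python
import numpy as np

# Hypothetical attribute signatures for three exercise activities.
# Rows: activity classes; columns: binary semantic attributes
# (e.g. "arms extended", "squatting", "arm curling") -- illustrative only.
CLASS_NAMES = ["bench dips", "squat", "dumbbell curl"]
CLASS_ATTRS = np.array([
    [1, 0, 0],   # bench dips
    [0, 1, 0],   # squat
    [0, 0, 1],   # dumbbell curl
])

def predict_activity(attr_scores: np.ndarray) -> str:
    """Second layer: map the attribute detectors' outputs to the class
    whose attribute signature is nearest (here, Euclidean distance).
    An unseen class only needs a signature row, not training data."""
    dists = np.linalg.norm(CLASS_ATTRS - attr_scores, axis=1)
    return CLASS_NAMES[int(np.argmin(dists))]

# Simulated first-layer output for one sensor window (in practice these
# would come from per-attribute classifiers over accelerometer features).
scores = np.array([0.1, 0.9, 0.2])
print(predict_activity(scores))  # -> squat
```

The key property this illustrates is that adding a new, unseen activity requires only writing down its attribute signature; the per-attribute detectors trained on seen activities are reused unchanged.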