A Model for Automated Affect Recognition on Smartphone-Cloud Architecture

This paper proposes a model for automated affect recognition on a smartphone-cloud architecture. Whilst facial mood recognition is becoming more advanced, our contribution is the analysis and classification of voice to supplement mood recognition. In the model we build upon the previous work of others and supplement it with new algorithms.
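The paper does not specify an implementation here, so the following is only a minimal sketch of how a voice-classification stage split across phone and cloud might look. The feature choice (MFCC statistics), the classifier (an SVM), the libraries (librosa, scikit-learn), and the function names extract_features and train_classifier are all illustrative assumptions, not the authors' method.

    # Hypothetical sketch: voice-based affect classification split
    # between a phone (feature extraction) and the cloud (classifier).
    # MFCCs, the SVM, and librosa/scikit-learn are stand-in choices,
    # not taken from the paper.
    import numpy as np
    import librosa
    from sklearn.svm import SVC

    def extract_features(wav_path: str) -> np.ndarray:
        """Phone side: compress one utterance into a fixed-length vector."""
        signal, sr = librosa.load(wav_path, sr=16000)
        mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
        # Mean and std over time keep the payload sent to the cloud small.
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    def train_classifier(paths, labels) -> SVC:
        """Cloud side: fit a classifier on labelled training utterances."""
        X = np.stack([extract_features(p) for p in paths])
        clf = SVC(kernel="rbf")
        clf.fit(X, labels)
        return clf

    # Usage: label = clf.predict(extract_features("utterance.wav")[None, :])

The phone-side summary statistics stand in for whatever compact acoustic representation the model uploads; the point of the split is that only a small feature vector, not raw audio, needs to leave the device.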
