An Approach to Sentiment Analysis for Mobile Speech Applications

The integration of Sentiment Analysis and spoken conversational interfaces provides mutual benefits: context-awareness information can be used to enhance the performance of these interfaces, achieving a more efficient and proactive human-machine communication that is dynamically adapted to the user's emotional state. In this paper, we describe a novel Sentiment Analysis approach that combines a lexicon-based model for specifying the set of emotions with a statistical methodology for identifying the most relevant topics in the document, which are the targets of the sentiments. Our proposal also includes a heuristic learning method that improves the initial knowledge by taking the users' feedback into account. We have integrated the proposed Sentiment Analysis approach into an Android-based mobile app that automatically assigns sentiments to pictures based on the descriptions provided by users.
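
To make the combination of components concrete, the following is a minimal sketch of the general idea: a lexicon-based step that scores emotions in a picture description, a simple frequency-based stand-in for the statistical topic-identification step, and an illustrative feedback rule that extends the lexicon. The lexicon entries, function names, and the update rule are assumptions for illustration only, not the implementation described in the paper.

```python
from collections import Counter
import re

# Toy emotion lexicon (illustrative entries, not the paper's actual lexicon).
EMOTION_LEXICON = {
    "joy":     {"happy", "wonderful", "sunny", "smile", "beautiful"},
    "sadness": {"sad", "rainy", "lonely", "gray", "tears"},
    "anger":   {"angry", "terrible", "awful", "ruined"},
}

STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "with", "was", "is", "it", "at", "had"}

def tokenize(text):
    """Lowercase a description and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def score_emotions(tokens, lexicon):
    """Lexicon-based step: count how many tokens match each emotion's word set."""
    return {emotion: sum(1 for t in tokens if t in words)
            for emotion, words in lexicon.items()}

def extract_topics(tokens, top_n=3):
    """Statistical step (frequency-based stand-in): pick the most frequent
    non-stopword, non-lexicon tokens as likely targets of the sentiment."""
    lexicon_words = set().union(*EMOTION_LEXICON.values())
    candidates = [t for t in tokens if t not in STOPWORDS and t not in lexicon_words]
    return [word for word, _ in Counter(candidates).most_common(top_n)]

def update_lexicon(lexicon, emotion, tokens):
    """Heuristic feedback step (illustrative rule): when the user confirms or
    corrects the assigned emotion, absorb the description's content words
    into that emotion's word set."""
    lexicon[emotion].update(t for t in tokens if t not in STOPWORDS)

if __name__ == "__main__":
    description = "A sunny day at the beach, everyone had a happy smile"
    tokens = tokenize(description)
    scores = score_emotions(tokens, EMOTION_LEXICON)
    dominant = max(scores, key=scores.get)
    topics = extract_topics(tokens)
    print(f"emotion scores: {scores}")          # e.g. {'joy': 2, 'sadness': 0, 'anger': 0}
    print(f"assigned: {dominant}, topics: {topics}")

    # Simulated user feedback: the user keeps the "joy" label, so words
    # such as "beach" are added to that emotion's lexicon entry.
    update_lexicon(EMOTION_LEXICON, "joy", tokens)
```

In an app such as the one described above, the dominant emotion would be attached to the picture as its sentiment label, while the extracted topics indicate what the sentiment is about; the feedback rule shows one simple way initial knowledge could be refined from user corrections.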
