Automatic Recognition of Personality Traits: A Multimodal Approach

A system capable of recognizing personality traits could be used in a wide range of applications. Adding personality-dependency may be useful for building speaker-adaptive models, e.g., to improve Spoken Dialogue Systems (SDSs) or to monitor agents in call centers. To this end, the First Audio/Visual Mapping Personality Traits Challenge (MAPTRAITS 2014) focuses on estimating personality traits. In this context, this study presents results for multimodal recognition of personality traits using support vector machines. Since only small portions of the data are used for personality estimation at a time, and these partial estimates are later combined into a final one, different segmentation methods, and ways of deriving a final hypothesis from the segment-level estimates, are analyzed, treating the task both as a regression and as a classification problem.
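The combination step described above can be illustrated with a small sketch. The abstract does not specify the fusion rule, so the two functions below are hypothetical but common choices: averaging the per-segment trait scores in the regression setting, and majority voting over per-segment labels in the classification setting.

```python
from collections import Counter
from statistics import mean

def combine_regression(segment_estimates):
    """Fuse per-segment trait scores into one final estimate.

    Assumes simple averaging; the paper may use a different rule.
    """
    return mean(segment_estimates)

def combine_classification(segment_labels):
    """Fuse per-segment trait labels via majority vote.

    Assumes plain majority voting; ties resolve to the label
    seen first among the most common ones.
    """
    return Counter(segment_labels).most_common(1)[0][0]

# Example: three segment-level SVM outputs for one trait
final_score = combine_regression([3.0, 4.5, 4.5])   # regression view
final_label = combine_classification(["high", "high", "low"])  # classification view
```

Both functions operate only on the segment-level outputs, so they are independent of whether those outputs come from audio, video, or a fused multimodal model.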
