Zhao Ren | Zixing Zhang | Jing Han | Björn Schuller
[1] Emily Mower Provost, et al. Cross-Corpus Acoustic Emotion Recognition with Multi-Task Learning: Seeking Common Ground While Preserving Differences, 2019, IEEE Transactions on Affective Computing.
[2] Carlos Busso, et al. Correcting Time-Continuous Emotional Labels by Modeling the Reaction Lag of Evaluators, 2015, IEEE Transactions on Affective Computing.
[3] Fabien Ringeval, et al. Prediction-based learning for continuous emotion recognition in speech, 2017, 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[4] Björn Schuller, et al. Strength modelling for real-world automatic continuous affect recognition from audiovisual signals, 2017, Image Vis. Comput.
[5] Dacheng Tao, et al. Trunk-Branch Ensemble Convolutional Neural Networks for Video-Based Face Recognition, 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[6] Björn W. Schuller, et al. Low-Level Fusion of Audio, Video Feature for Multi-Modal Emotion Recognition, 2008, VISAPP.
[7] James Philbin, et al. FaceNet: A unified embedding for face recognition and clustering, 2015, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[8] Fabien Ringeval, et al. AV+EC 2015: The First Affect Recognition Challenge Bridging Across Audio, Video, and Physiological Data, 2015, AVEC@ACM Multimedia.
[9] Björn Schuller, et al. Opensmile: the munich versatile and fast open-source audio feature extractor, 2010, ACM Multimedia.
[10] Hatice Gunes, et al. Continuous Prediction of Spontaneous Affect from Multiple Cues and Modalities in Valence-Arousal Space, 2011, IEEE Transactions on Affective Computing.
[11] Jian Huang, et al. Speech Emotion Recognition from Variable-Length Inputs with Triplet Loss Function, 2018, INTERSPEECH.
[12] Björn W. Schuller, et al. From Hard to Soft: Towards more Human-like Emotion Recognition by Modelling the Perception Uncertainty, 2017, ACM Multimedia.
[13] Björn Schuller, et al. Emotion Recognition in Speech with Latent Discriminative Representations Learning, 2018, Acta Acustica united with Acustica.
[14] George Trigeorgis, et al. End-to-End Multimodal Emotion Recognition Using Deep Neural Networks, 2017, IEEE Journal of Selected Topics in Signal Processing.
[15] Fabien Ringeval, et al. Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions, 2013, 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG).
[16] Larry P. Heck, et al. Learning deep structured semantic models for web search using clickthrough data, 2013, CIKM.
[17] Wojciech Zaremba, et al. An Empirical Exploration of Recurrent Network Architectures, 2015, ICML.
[18] Reza Lotfian, et al. Curriculum Learning for Speech Emotion Recognition From Crowdsourced Labels, 2018, IEEE/ACM Transactions on Audio, Speech, and Language Processing.
[19] Changsheng Xu, et al. Learning Consistent Feature Representation for Cross-Modal Multimedia Retrieval, 2015, IEEE Transactions on Multimedia.
[20] Stefan Wermter, et al. Evaluating Integration Strategies for Visuo-Haptic Object Recognition, 2017, Cognitive Computation.
[21] Zhihong Zeng, et al. Audio-visual affect recognition through multi-stream fused HMM for HCI, 2005, 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05).
[22] Stefan Wermter, et al. The OMG-Emotion Behavior Dataset, 2018, 2018 International Joint Conference on Neural Networks (IJCNN).
[23] Jacob Cohen, et al. Applied multiple regression/correlation analysis for the behavioral sciences, 1979.
[24] Samy Bengio, et al. Large Scale Online Learning of Image Similarity Through Ranking, 2009, J. Mach. Learn. Res.
[25] Andrew Zisserman, et al. Emotion Recognition in Speech using Cross-Modal Transfer in the Wild, 2018, ACM Multimedia.
[26] Eduardo Coutinho, et al. Distributing Recognition in Computational Paralinguistics, 2014, IEEE Transactions on Affective Computing.
[27] Zoraida Callejas Carrión, et al. Sentiment Analysis: From Opinion Mining to Human-Agent Interaction, 2016, IEEE Transactions on Affective Computing.
[28] Zhigang Deng, et al. Analysis of emotion recognition using facial expressions, speech and multimodal information, 2004, ICMI '04.
[29] Qin Jin, et al. Multi-modal Multi-cultural Dimensional Continues Emotion Recognition in Dyadic Interactions, 2018, AVEC@MM.
[30] Erik Cambria, et al. Affective Computing and Sentiment Analysis, 2016, IEEE Intelligent Systems.
[31] Andrew Zisserman, et al. Deep Face Recognition, 2015, BMVC.
[32] Soraia M. Alarcão, et al. Emotions Recognition Using EEG Signals: A Survey, 2019, IEEE Transactions on Affective Computing.
[33] Carlos Busso, et al. Domain Adversarial for Acoustic Emotion Recognition, 2018, IEEE/ACM Transactions on Audio, Speech, and Language Processing.
[34] Björn W. Schuller, et al. Context-Sensitive Learning for Enhanced Audiovisual Emotion Classification, 2012, IEEE Transactions on Affective Computing.
[35] Fabien Ringeval, et al. AVEC 2016: Depression, Mood, and Emotion Recognition Workshop and Challenge, 2016, AVEC@ACM Multimedia.
[36] Michael Wagner, et al. Multimodal Depression Detection: Fusion Analysis of Paralinguistic, Head Pose and Eye Gaze Behaviors, 2018, IEEE Transactions on Affective Computing.
[37] Zixing Zhang, et al. Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives, 2018, ArXiv.
[38] Zhihong Zeng, et al. A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions, 2007, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[39] Antonio Torralba, et al. SoundNet: Learning Sound Representations from Unlabeled Video, 2016, NIPS.
[40] Aurobinda Routray, et al. Automatic facial expression recognition using features of salient facial patches, 2015, IEEE Transactions on Affective Computing.
[41] Fabien Ringeval, et al. AVEC 2018 Workshop and Challenge: Bipolar Disorder and Cross-Cultural Affect Recognition, 2018, AVEC@MM.
[42] Angeliki Metallinou, et al. Decision level combination of multiple modalities for recognition and analysis of emotional expression, 2010, 2010 IEEE International Conference on Acoustics, Speech and Signal Processing.
[43] Erik Cambria, et al. Fusing audio, visual and textual clues for sentiment analysis from multimodal content, 2016, Neurocomputing.
[44] Björn W. Schuller, et al. A multitask approach to continuous five-dimensional affect sensing in natural speech, 2012, TIIS.
[45] Fabien Ringeval, et al. Reconstruction-error-based learning for continuous emotion recognition in speech, 2017, 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[46] Yu Qiao, et al. Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks, 2016, IEEE Signal Processing Letters.
[47] Yang Yang, et al. Adversarial Cross-Modal Retrieval, 2017, ACM Multimedia.
[48] Akane Sano, et al. Personalized Multitask Learning for Predicting Tomorrow's Mood, Stress, and Health, 2020, IEEE Transactions on Affective Computing.
[49] Yoshua Bengio, et al. On the Properties of Neural Machine Translation: Encoder–Decoder Approaches, 2014, SSST@EMNLP.
[50] Yang Liu, et al. A Multi-Task Learning Framework for Emotion Recognition Using 2D Continuous Space, 2017, IEEE Transactions on Affective Computing.
[51] Eduardo Coutinho, et al. Dynamic Difficulty Awareness Training for Continuous Emotion Prediction, 2018, IEEE Transactions on Multimedia.
[52] Björn W. Schuller, et al. The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing, 2016, IEEE Transactions on Affective Computing.
[53] Changsheng Xu, et al. Cross-Domain Feature Learning in Multimedia, 2015, IEEE Transactions on Multimedia.
[54] Nir Ailon, et al. Deep Metric Learning Using Triplet Network, 2014, SIMBAD.
[55] Fakhri Karray, et al. Survey on speech emotion recognition: Features, classification schemes, and databases, 2011, Pattern Recognit.