Towards an EmoCog Model for Multimodal Empathy Prediction
[1] P. Laukka, et al. Communication of emotions in vocal expression and music performance: Different channels, same code?, 2003, Psychological Bulletin.
[2] R. Blair. Responding to the emotions of others: Dissociating forms of empathy through the study of typical and psychiatric populations, 2005, Consciousness and Cognition.
[3] A. Damasio, et al. The neural substrates of cognitive empathy, 2007, Social Neuroscience.
[4] R. Provine. Laughter punctuates speech: Linguistic, social and gender contexts of laughter, 2010.
[5] Björn Schuller, et al. openSMILE: The Munich versatile and fast open-source audio feature extractor, 2010, ACM Multimedia.
[6] Chih-Jen Lin, et al. LIBSVM: A library for support vector machines, 2011, ACM TIST.
[7] Hiroshi G. Okuno, et al. A recipe for empathy, 2015, Int. J. Soc. Robotics.
[8] J. Cacioppo, et al. Perceived interpersonal synchrony increases empathy: Insights from autism spectrum disorder, 2016, Cognition.
[9] Hatice Gunes, et al. CNN-based facial affect analysis on mobile devices, 2018, arXiv.
[10] Shan Li, et al. Deep facial expression recognition: A survey, 2018, IEEE Transactions on Affective Computing.
[11] Mohammad H. Mahoor, et al. AffectNet: A database for facial expression, valence, and arousal computing in the wild, 2017, IEEE Transactions on Affective Computing.