M3B corpus: multi-modal meeting behavior corpus for group meeting assessment
Yusuke Soneda | Yuki Matsuda | Yutaka Arakawa | Keiichi Yasumoto