Crowdsourced Multimodal Corpora Collection Tool

In recent years, a growing number of multimodal corpora have been created. To our knowledge, however, there is no publicly available tool that allows for acquiring controlled multimodal data of people in a rapid ...
