Stars2: a corpus of object descriptions in a visual domain

This paper presents Stars2, a corpus of definite descriptions for referring expression generation (REG). The corpus was collected in a collaborative communication task involving speaker-hearer pairs, and it includes situations of reference that are arguably under-represented in comparable resources. Stars2 is intended as an incremental contribution to research in REG and related fields: it may serve as training/test data for REG algorithms, and it may also provide further insight into reference phenomena in general, with a particular focus on attribute choice in referential overspecification.