Using Semantic Web for Generating Questions: Do Different Populations Perceive Questions Differently?

In this paper, I propose an approach to using semantic web data to generate questions that are intended to help people develop arguments in a discussion session. Applying this approach, I developed a question generation system that exploits WordNet to generate questions for argumentation. The paper describes a study investigating whether different populations perceive questions (either generated by a system or by human experts) differently. For this study, I asked eight human experts from the argumentation and question generation communities to construct questions for three discussion topics and used the question generation system to generate questions for argumentation. I then invited three groups of raters to evaluate the mixed set of questions: (1) computer scientists, (2) researchers from the argumentation and question generation communities, and (3) student teachers of computer science. The evaluation showed that human-generated questions were perceived differently by the three populations across all three quality criteria (understandability, relevance, and usefulness). For system-generated questions, the hypothesis could be confirmed only for the relevance and usefulness criteria. This finding motivates question generation researchers to deploy techniques that generate questions adaptively for different target groups.
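To make the WordNet-based idea concrete, the sketch below shows one way a template-based generator could turn WordNet relations (definitions, hypernyms, hyponyms) for a topic term into discussion prompts. This is a minimal, hypothetical illustration using NLTK's WordNet interface; the function name, templates, and pipeline are my assumptions for exposition, not the actual system evaluated in the paper.

```python
# Minimal sketch of template-based question generation over WordNet (via NLTK).
# Assumes the WordNet corpus is installed, e.g. nltk.download("wordnet").
from nltk.corpus import wordnet as wn


def generate_questions(topic_term, max_questions=5):
    """Fill simple argumentation-oriented templates with WordNet relations."""
    questions = []
    for synset in wn.synsets(topic_term):
        # Definition-based prompt: ask whether the gloss matches the debate.
        questions.append(
            f"WordNet defines '{topic_term}' as '{synset.definition()}'. "
            f"Does this definition fit the way it is used in the discussion?"
        )
        # Hypernym-based prompt: relate the topic to a broader concept.
        for hyper in synset.hypernyms():
            broader = hyper.lemma_names()[0].replace("_", " ")
            questions.append(
                f"'{topic_term}' is a kind of '{broader}'. "
                f"Do arguments about '{broader}' also apply here?"
            )
        # Hyponym-based prompt: probe a more specific case.
        for hypo in synset.hyponyms()[:2]:
            narrower = hypo.lemma_names()[0].replace("_", " ")
            questions.append(
                f"How does the more specific case of '{narrower}' "
                f"strengthen or weaken your position on '{topic_term}'?"
            )
        if len(questions) >= max_questions:
            break
    return questions[:max_questions]


if __name__ == "__main__":
    for question in generate_questions("education"):
        print(question)
```

In such a pipeline, the quality criteria used in the study (understandability, relevance, usefulness) would apply to the filled templates, so the choice of templates and of WordNet relations largely determines how different rater groups perceive the output.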
