Analyzing question quality through intersubjectivity: World views and objective assessments of questions on social question-answering

Social question-answering (SQA) allows people to ask questions in natural language and receive answers from others. While research on SQA has focused on the quality of the answers provided, with implications for system-based interventions, few studies have examined whether the questions asked to elicit those answers accurately depict an asker's information need. To address this gap, the current study explores the viability of system-based interventions to improve questions by comparing human, non-textual assessments of question quality with automatic, textual features extracted from the questions' content, in order to determine whether there is a significant relationship between subjective judgments on the one hand and objective ones on the other. Findings indicate not only that there is a significant relationship between human-based ratings of question quality criteria and extracted textual features, but also that distinct textual features contribute to explaining the variability of each human-based rating. These findings encourage further study of the relationship between the reasons a question might be of poor quality and the textual features that can be extracted from it. This relationship can ultimately inform the design of intervention-based systems that not only assess question quality automatically, but also explain to the asker, in understandable terms, why a question is of poor quality and suggest how to revise it.
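To make the methodology concrete, the sketch below shows one way textual features of the kind the abstract describes might be extracted from questions and correlated with human quality ratings. It is a minimal illustration under stated assumptions, not the study's actual instrument set: the specific features (word count, question-mark presence, Flesch Reading Ease), the crude syllable heuristic, and the toy questions with their 1-5 ratings are all hypothetical examples introduced here; only the Flesch formula constants are the standard published ones.

```python
import re

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; adequate for a sketch, not production use.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def extract_features(question: str) -> dict:
    """Extract simple, automatically computable textual features from a question."""
    words = re.findall(r"[A-Za-z']+", question)
    sentences = max(1, len(re.findall(r"[.!?]+", question)))
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    flesch = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
    return {
        "length_words": n_words,
        "has_question_mark": float("?" in question),
        "flesch_reading_ease": flesch,
    }

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical questions paired with illustrative human quality ratings (1-5).
rated = [
    ("How much change do you get from $40 if lunch costs $27.50?", 4),
    ("help???", 1),
    ("What are good beginner books on statistics, and why?", 5),
    ("i need answer now pls", 1),
]
features = [extract_features(q) for q, _ in rated]
ratings = [r for _, r in rated]

# Relate each objective feature to the subjective ratings.
for name in features[0]:
    r = pearson([f[name] for f in features], ratings)
    print(f"{name}: r = {r:+.2f}")
```

A real analysis along these lines would use far more questions, validated rating scales for each quality criterion, and a regression model per criterion to see which features explain each rating's variance, as the abstract's findings suggest.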
