Fit or Unfit: Analysis and Prediction of `Closed' Questions on Stack Overflow

Stack Overflow is widely regarded as the most popular Community-driven Question Answering (CQA) website for programmers. Questions posted on Stack Overflow that are not related to programming topics are marked as `closed' by experienced users and community moderators. A question can be `closed' for one of five reasons: duplicate, off-topic, subjective, not a real question, and too localized. In this work, we present the first study of `closed' questions on Stack Overflow. We download 4 years of publicly available data containing 3.4 million questions, and first analyze and characterize the complete set of 0.1 million `closed' questions. Next, we use a machine learning framework to build a predictive model that identifies a `closed' question at the time of question creation. One of our key findings is that despite being marked as `closed', subjective questions contain high information value and are very popular with users. We observe an increasing trend in the percentage of closed questions over time and find that this increase is positively correlated with the number of newly registered users. In addition, we see a decrease in community participation in marking `closed' questions, which has led to an increase in moderation time. We also find that questions closed with the Duplicate and Off-Topic labels are relatively more prone to reputation gaming. Our analysis suggests broader implications for content quality maintenance on CQA websites. For the `closed' question prediction task, we make use of multiple genres of feature sets based on user profile, community process, textual style, and question content. We use a state-of-the-art machine learning classifier based on an ensemble framework and achieve an overall accuracy of 70.3%. Analysis of the feature space reveals that `closed' questions are relatively less informative and descriptive than non-`closed' questions.
To the best of our knowledge, this is the first experimental study to analyze and predict `closed' questions on Stack Overflow.
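To make the prediction pipeline concrete, the following is a minimal sketch of the idea: extract one feature per genre (user profile, community process, textual style, question content) and combine weak per-feature rules in an ensemble vote. The feature names, thresholds, and question fields here are illustrative assumptions, not the paper's actual feature set or classifier.

```python
# Hypothetical sketch of the closed-question prediction pipeline.
# Feature names, thresholds, and the question schema are assumptions
# for illustration; the paper's actual model is a boosted ensemble.
from collections import Counter


def extract_features(q):
    """Map a question dict to one toy feature per feature genre."""
    return {
        "user_rep": q["user_reputation"],           # user-profile genre
        "prior_closed": q["asker_prior_closed"],    # community-process genre
        "title_len": len(q["title"].split()),       # textual-style genre
        "has_code": int("<code>" in q["body"]),     # question-content genre
    }


def stump(feature, threshold):
    """One weak learner: predict 'closed' (1) when the feature is below a threshold."""
    return lambda f: 1 if f[feature] < threshold else 0


# A toy majority-vote ensemble standing in for the paper's ensemble framework.
ensemble = [
    stump("user_rep", 50),    # low-reputation askers are riskier
    stump("title_len", 5),    # very short titles are riskier
    stump("has_code", 1),     # no code block is riskier
]


def predict_closed(q):
    """Return 1 if the ensemble majority predicts the question will be closed."""
    f = extract_features(q)
    votes = Counter(clf(f) for clf in ensemble)
    return votes.most_common(1)[0][0]


question = {
    "title": "help",
    "body": "my program doesnt work",
    "user_reputation": 1,
    "asker_prior_closed": 2,
}
print(predict_closed(question))  # low reputation, short title, no code -> 1
```

In the paper, the weak rules above would be replaced by a trained ensemble classifier over the full feature sets, but the structure (per-genre features feeding a combined vote) is the same.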
