Assessing the Credibility of Claims on the Web

In my doctoral research, I plan to address the problem of assessing the credibility of arbitrary claims made in natural-language text in an open-domain setting. Automatic credibility assessment is a complex task that depends on many factors. As a starting point, we propose three factors that can help in assessing the credibility of textual claims: (i) the reliability of the web sources reporting the claim, (ii) the language style of the articles reporting the claim, and (iii) their stance (i.e., support or refute) towards the claim. In addition, we also focus on extracting user-interpretable explanations as evidence supporting the verdict of the assessment.
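To make the interplay of these factors concrete, the sketch below shows one minimal way the three signals could be combined into a single credibility score for a claim. It is purely illustrative: the per-source reliability table, the style and stance scores, and the weighting scheme are assumptions for the sake of the example, not the model proposed in this work.

```python
# Hypothetical sketch: combining source reliability, language style, and stance
# into one credibility score. All names and values here are illustrative
# assumptions, not the actual model from the proposal.

from dataclasses import dataclass

@dataclass
class Article:
    source: str          # e.g. "nytimes.com"
    style_score: float   # in [0, 1]; higher = more objective, credible language style
    stance: float        # in [-1, 1]; -1 refutes the claim, +1 supports it

# Assumed per-source reliability priors (in practice these would be learned).
SOURCE_RELIABILITY = {
    "nytimes.com": 0.9,
    "random-blog.example": 0.3,
}

def claim_credibility(articles: list[Article]) -> float:
    """Aggregate a credibility score for a claim from the articles reporting it.

    Each article contributes its stance, weighted by the reliability of its
    source and the credibility of its language style.
    """
    num, den = 0.0, 0.0
    for a in articles:
        reliability = SOURCE_RELIABILITY.get(a.source, 0.5)  # default: unknown source
        weight = reliability * a.style_score
        num += weight * a.stance
        den += weight
    # Map the weighted average stance from [-1, 1] to a credibility score in [0, 1].
    return 0.5 * (num / den + 1.0) if den else 0.5

if __name__ == "__main__":
    articles = [
        Article("nytimes.com", style_score=0.8, stance=+1.0),
        Article("random-blog.example", style_score=0.4, stance=-1.0),
    ]
    print(f"claim credibility: {claim_credibility(articles):.3f}")
```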
