Crowd-O-Meter: Predicting if a Person Is Vulnerable to Believe Political Claims

Social media platforms have been criticized for promoting false information during the 2016 U.S. presidential election campaign. Our work is motivated by the idea that a platform could reduce the circulation of false information if it could estimate whether its users are vulnerable to believing political claims. Here we explore whether such a vulnerability could be measured in a crowdsourcing setting. We propose Crowd-O-Meter, a framework that automatically predicts whether a crowd worker will be consistent in their beliefs about political claims, i.e., consistently believes the claims are true or consistently believes the claims are not true. Crowd-O-Meter is a user-centered approach that interprets a combination of cues characterizing the user's implicit and explicit opinion bias. Experiments on 580 quotes from PolitiFact's fact-checking corpus of 2016 U.S. presidential candidates show that Crowd-O-Meter is precise and accurate for two news modalities: text and video. Our analysis also reveals which cues are most informative of a person's vulnerability.
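The abstract does not specify the model, but the prediction task it describes can be framed as binary classification over per-worker cue features. Below is a minimal, hypothetical sketch in Python with scikit-learn; the cue names (implicit/explicit bias scores, response time) and the logistic-regression classifier are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch: predicting whether a crowd worker is "consistent"
# in their beliefs about political claims, as binary classification.
# The cue features and the model choice below are assumptions for
# illustration; the paper's actual cue set and classifier may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each row is one worker's cue vector, e.g.
# [implicit_bias_score, explicit_bias_score, mean_response_time_s, n_claims_judged]
X = np.array([
    [0.42, 1.0, 12.3, 20],
    [0.10, 0.0,  8.7, 20],
    [0.55, 1.0, 15.1, 20],
    [0.05, 0.0,  9.9, 20],
])
# Label: 1 if the worker judged the claims consistently
# (all believed true or all believed false), 0 otherwise.
y = np.array([1, 0, 1, 0])

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=2)  # toy data; real evaluation needs many workers
print("toy cross-validated accuracy:", scores.mean())

# Fitted coefficients hint at which cues carry the most signal about
# vulnerability, echoing the paper's analysis of informative cues.
clf.fit(X, y)
print("cue weights:", clf.coef_)
```

In this framing, inspecting the learned weights (or a feature-importance analysis) is one plausible way to identify the most informative cues of a person's vulnerability, as the abstract reports.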
