Agents for Fighting Misinformation Spread on Twitter: Design Challenges

Containing the spread of misinformation on social media has been recognized as a major socio-technical challenge in recent years. Despite advances, there remains a clear need for practical and timely solutions that communicate verified (mis)information to social media users. We introduce a multi-agent approach to connect Twitter users with fact-checked information: a social bot that nudges users who share verified misinformation, and a conversational agent that checks whether a reputable fact-check is available and explains existing assessments in natural language. Both agents share the same requirements: evoking trust and being perceived by Twitter users as an opportunity to build their media literacy. To this end, we present two preliminary human-centred studies, the first seeking an adequate identity for the bot, and the second exploring user preferences for credibility indicators when explaining the assessment of misinformation. The results indicate that this design research should pursue agents that are consistent in their presentation, friendly, engaging, and credible.
