Taking Back Control of Social Media Feeds with Take Back Control

Controlling the quality of social media feeds poses a challenge for many users. Platforms such as Twitter give users some options to influence their feeds, but content selection still relies predominantly on implicit rather than explicit user actions, as the manual options for "cleaning the feed" are often cumbersome and difficult for most users to operate. Here, we present Take Back Control, a web browser extension that lets users hide undesirable content in their social media feeds. The extension combines JavaScript (for hiding the content) and machine learning (for deciding which content to hide). Our current demonstration includes three filter types, Toxic, Political, and Negative content, with the possibility of adding more filters, all with the overarching aim of helping end users control the information visible in their social media feeds.
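The division of labor described above (machine learning scores a post per filter; JavaScript hides posts whose score crosses a threshold) can be sketched roughly as follows. This is a minimal illustration, not the authors' actual implementation: the filter names match the demonstration, but `scorePost` is a placeholder keyword heuristic standing in for the real classifier, and the threshold value and `ACTIVE_FILTERS` structure are assumptions.

```javascript
// Which filters the user has switched on (illustrative defaults).
const ACTIVE_FILTERS = { toxic: true, political: false, negative: true };
// Assumed probability cutoff above which a post is hidden.
const THRESHOLD = 0.8;

// Placeholder for the ML model: returns a per-filter probability for
// the post text. A keyword heuristic stands in for the real classifier.
function scorePost(text) {
  return {
    toxic: /idiot|hate/i.test(text) ? 0.9 : 0.1,
    political: /election|senate/i.test(text) ? 0.9 : 0.1,
    negative: /awful|terrible/i.test(text) ? 0.9 : 0.1,
  };
}

// A post is hidden if ANY active filter's score reaches the threshold.
function shouldHide(scores, activeFilters, threshold) {
  return Object.entries(scores).some(
    ([filter, p]) => activeFilters[filter] && p >= threshold
  );
}

// DOM side: collapse the post element so the feed reflows without it.
function hidePost(postElement) {
  postElement.style.display = "none";
}
```

In a real extension, a content script would walk the feed's post elements (for example via a `MutationObserver`, since feeds load posts dynamically), call `shouldHide(scorePost(postText), ...)` for each, and apply `hidePost` to the matches.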
