The Social Impact of Deepfakes

In many ways, this special issue was inspired by a visit to the University of Washington in 2018. Seitz and his colleagues had just published the algorithms behind their now famous Obama video, in which a few hours of simple audio clips drive high-quality, lip-synced video. At the end of the video, an audio clip of a young Obama is parroted perfectly by a video version of Obama who is twice his age. This is likely the most canonical, if not the original, "deepfake" video. It is enabled by machine learning, which uses multiple videos as a training set to map speech onto "mouth shapes" that are then composited into an existing target video. The outcome is a stunningly realistic video that few would give a second glance; it simply looks like President Obama talking.

Aside from the realism of the videos, two things about Seitz's presentation were striking. First, deepfakes are easier to create than to detect, a consequence of the very nature of the generative adversarial networks (GANs) employed. According to Goodfellow and colleagues, these models are built by pitting "counterfeiters" against "police," so a successful model has, by definition, already shown that its fakes can beat detection methods (a toy sketch of this dynamic follows this introduction). Indeed, as deepfakes have migrated from top computer science laboratories to cheap software platforms all over the world, researchers have also focused on defensive algorithms that could detect the deception (see Tolosana et al. for a recent review). But Seitz was not confident about this strategy, and likened the spiral of deception and detection to an arms race in which the algorithms that deceive hold the early advantage over those that detect.

The second eye-opener was the many social and psychological questions that deepfakes raise: Does exposure to deepfakes undermine trust in the media? How might deepfakes be used during social interactions? Are there strategies for debunking or countering deepfakes? There has been ample work in computer science on the automatic generation and detection of deepfakes, but to date only a handful of social scientists have examined the social impact of the technology. It is time to understand the possible effects deepfakes might have on people, and how psychological and media theories apply.
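To make the counterfeiter-versus-police framing concrete, the following is a minimal, self-contained sketch of adversarial training in PyTorch. The one-dimensional toy data, network sizes, and hyperparameters are illustrative assumptions rather than the setup of any real deepfake system; the sketch only shows the structural point that the generator is optimized directly against the detector it must fool.

import torch
import torch.nn as nn

# "Counterfeiter": turns random noise into a fake sample.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# "Police": scores a sample with a logit for "this is real."
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

torch.manual_seed(0)
for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0  # toy "real" data: N(2.0, 0.5)
    fake = generator(torch.randn(64, 8))

    # Train the police to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Train the counterfeiter to make the police label its fakes 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# A converged generator produces samples this detector can no longer
# reliably separate from real data: the fake has already beaten it.
with torch.no_grad():
    print(f"fake mean: {generator(torch.randn(256, 8)).mean():.2f} (target 2.0)")

Because the generator's loss is literally the detector's failure, every improvement in detection is immediately available as a training signal for deception, which is the structural head start behind the arms-race analogy.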

[1] Doris A. Graber et al. Seeing is remembering: How visuals contribute to learning from television news. 1990.

[2] Kathryn Y. Segovia et al. Virtually True: Children's Acquisition of False Memories in Virtual Reality. 2009.

[3] M. Garry et al. Actually, a picture is worth less than 45 words: Narratives produce more false memories than photographs do. 2005. Psychonomic Bulletin & Review.

[4] M. Prior et al. Visual Political Knowledge: A Different Road to Competence? 2013. The Journal of Politics.

[5] J. Bailenson et al. Virtual Self-Modeling: The Effects of Vicarious Reinforcement and Identification on Exercise Behaviors. 2009.

[6] Jeffrey T. Hancock et al. See No Evil: The Effect of Communication Medium and Motivation on Deception Detection. 2010.

[7] Jeffrey T. Hancock et al. How Advertorials Deactivate Advertising Schema: MTurk-Based Experiments to Examine Persuasion Tactics and Outcomes in Health Advertisements. 2017. Communication Research.

[8] D. Greenbaum et al. Deep Fakes and Memory Malleability: False Memories in the Service of Fake News. 2020. AJOB Neuroscience.

[9] B. DePaulo et al. Accuracy of Deception Judgments. 2006. Personality and Social Psychology Review.

[10] N. Helberger et al. Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes? 2020.

[11] Soo Youn Oh et al. Let the Avatar Brighten Your Smile: Effects of Enhancing Facial Expressions in Virtual Environments. 2016. PLoS ONE.

[12] T. Levine. Duped: Truth-Default Theory and the Social Science of Lying and Deception. 2019.

[13] J. Bailenson et al. Self-Endorsed Advertisements: When the Self Persuades the Self. 2014.

[15] M. Posner et al. Visual dominance: an information-processing account of its origins and significance. 1976. Psychological Review.

[16] Mor Naaman et al. AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations. 2020. Journal of Computer-Mediated Communication.

[17] R. Tolosana et al. DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection. 2020. Information Fusion.

[18] Saifuddin Ahmed et al. Who inadvertently shares deepfakes? Analyzing the role of political interest, cognitive ability, and social network size. 2020. Telematics and Informatics.

[19] Charles Spence et al. Seeing the light: exploring the Colavita visual dominance effect. 2007. Experimental Brain Research.

[20] C.D. Martin et al. The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places [Book Review]. 1997. IEEE Spectrum.

[21] Andrew Chadwick et al. Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. 2020. Social Media + Society.

[22] S. Sundar. The MAIN Model: A Heuristic Approach to Understanding Technology Effects on Credibility. 2007.