Perception vs. Reality: Understanding and Evaluating the Impact of Synthetic Image Deepfakes on College Students

Artificial Intelligence (AI)-powered Deepfakes pose new challenges to consumers' visual experience and carry a wide range of negative consequences (e.g., non-consensual intimate imagery, political dis/misinformation, financial fraud, and cybersecurity threats) for individuals, societies, and organizations. Prior research has suggested legislation, corporate policies, anti-Deepfake technology, education, and training to combat Deepfakes, including the use of synthetic media itself to raise awareness so that people become more critical when evaluating such content in the future. To educate and raise awareness among college students, this pilot survey study presented both synthetic and real images to undergraduate students (N=19) to understand how well human cognition and perception, demonstrated here by a literate population, can detect Deepfake media with the unaided eye. The results showed that human cognition and perception alone are insufficient for detecting synthetic media, and that even an educated population is vulnerable to this technology. As Deepfakes become increasingly sophisticated and imperceptible, the study observed that surveys of this kind can help raise awareness of the technology's societal impact and may also improve participants' ability to detect Deepfakes in future encounters.
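The abstract does not detail the study's statistical analysis, but a common way to quantify the claim that detection ability is "insufficient" is a per-participant binomial test of real-vs-fake classification accuracy against chance. The Python sketch below illustrates this approach; the image count and correct-response count are hypothetical placeholders, not figures from the study.

```python
# Minimal sketch (not from the paper): test whether one participant's
# real-vs-synthetic classification accuracy exceeds chance.
from scipy.stats import binomtest

N_IMAGES = 20           # hypothetical: images shown to each participant
correct_responses = 11  # hypothetical: correct real/fake judgments by one participant

# Null hypothesis: the participant is guessing, so each image is
# labeled correctly with probability 0.5.
result = binomtest(correct_responses, n=N_IMAGES, p=0.5, alternative="greater")
print(f"accuracy = {correct_responses / N_IMAGES:.2f}, p = {result.pvalue:.3f}")
# A large p-value means the participant's accuracy is statistically
# indistinguishable from chance, consistent with the study's finding
# that unaided human perception struggles to detect synthetic images.
```

Aggregating such tests (or pooling responses across all N=19 participants) would indicate whether the group as a whole performs above chance; the choice here of a one-sided test and a 0.5 chance rate assumes a balanced real/synthetic image set.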
