The Future of Deception: Machine-Generated and Manipulated Images, Video, and Audio?

Social sensing techniques were designed for analyzing unreliable data [1], but not explicitly built for adversarially generated and manipulated data. The adversarial use of social media to spread deceptive or misleading information poses a social, economic, and political threat [2]. Deceptive information spreads quickly and inexpensively online relative to traditional methods of dissemination (e.g., print, radio, and television). For example, bots (i.e., dedicated software for sharing text information [3]) can distribute information faster than humans. Such deceptive information is commonly referred to as fake (fabricated) news, which can be a form of propaganda (i.e., manipulation to advance a particular view or agenda). Information spread is particularly effective if the content resonates with the preconceptions and biases of social groups or communities, because the spread is reinforced by the implied trust in information coming from other members (echo chambers and filter bubbles) [4]. We conjecture that the future of online deception, including fake news, will extend beyond text to high-quality, mass-produced machine-generated and manipulated images, video, and audio [5].