Social sensing techniques were designed for analyzing unreliable data [1], but not explicitly built for adversarially generated and manipulated data. The adversarial use of social media to spread deceptive or misleading information poses a social, economic, and political threat [2]. Deceptive information spreads quickly and inexpensively online relative to traditional methods of dissemination (e.g., print, radio, and television). For example, bots (i.e., dedicated software for sharing text information [3]) can distribute information faster than humans can. Such deceptive information is commonly referred to as fake (fabricated) news, which can be a form of propaganda (i.e., manipulation to advance a particular view or agenda). Information spread is particularly effective when the content resonates with the preconceptions and biases of social groups or communities, because the spread is reinforced by implied trust in information coming from other members (echo chambers and filter bubbles) [4]. We conjecture that the future of online deception, including fake news, will extend beyond text to high-quality, mass-produced machine-generated and manipulated images, video, and audio [5].
[1] Samuel C. Woolley et al., "Computational propaganda worldwide: Executive summary," 2017.
[2] Ira Kemelmacher-Shlizerman et al., "Synthesizing Obama," ACM Trans. Graph., 2017.
[3] Jaakko Lehtinen et al., "Progressive Growing of GANs for Improved Quality, Stability, and Variation," ICLR, 2018.
[4] Heng Ji et al., "The Age of Social Sensing," Computer, 2018.