Multimedia Forensics

With the availability of powerful and easy-to-use media editing tools, falsifying images and videos has become widespread in the last few years. Coupled with ubiquitous social networks, this enables the viral dissemination of fake news and raises serious concerns about multimedia security. The situation has become even worse with the advent of deep learning, which has made possible sophisticated manipulations that were previously unthinkable (e.g., deepfakes). This tutorial will present the most reliable methods for detecting manipulated images and for identifying their source, essential tools today for fact checking and authorship verification, and therefore a timely and relevant topic for the multimedia security community. The tutorial will focus on digital integrity and source attribution, with reference to both images and videos.

For media authenticity, the main techniques for forgery detection and localization will be presented, starting from methods that rely on camera-based and format-based artifacts. The most innovative deep-learning solutions will then be described, considering both supervised and unsupervised approaches. Results will be presented on challenging datasets and realistic scenarios, such as the spread of manipulated images and videos over social networks. In addition, the robustness of these methods to adversarial attacks will be analyzed.

The problem of attributing an image or video to the device used for its acquisition will be analyzed from different viewpoints: detecting the kind of device used (e.g., scanner vs. camera); detecting the make and model (e.g., one camera model vs. another); and detecting the specific device (e.g., one device of a given model vs. another device of the same model). State-of-the-art solutions exploiting either model-based or data-driven techniques will be presented, with results on up-to-date standard datasets for this task.
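
As a concrete illustration of the model-based techniques covered in the source-attribution part, the sketch below outlines PRNU-based camera identification: a sensor fingerprint is estimated from a set of images taken by one camera, and a test image is attributed by correlating its noise residual with the expected fingerprint term. This is a minimal sketch, not the tutorial's reference implementation; the Gaussian denoiser, the function names, and the plain normalized correlation score are simplifying assumptions (practical systems typically use wavelet-based denoising and PCE-based detection).

```python
# Minimal sketch of PRNU-based camera attribution (illustrative assumptions:
# grayscale float images of identical size, Gaussian denoiser as a stand-in
# for the wavelet-based denoisers used in practice).
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Residual W = I - denoise(I); the denoiser here is a simple Gaussian stand-in."""
    return img - gaussian_filter(img, sigma)

def estimate_fingerprint(images):
    """Maximum-likelihood-style PRNU estimate from several images of one camera."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img in images:
        w = noise_residual(img)
        num += w * img
        den += img * img
    return num / (den + 1e-8)

def correlation_score(test_img, fingerprint):
    """Normalized correlation between the test residual and the expected PRNU term."""
    w = noise_residual(test_img)
    expected = fingerprint * test_img
    a = (w - w.mean()).ravel()
    b = (expected - expected.mean()).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Usage (hypothetical data): attribute test_img to the camera whose fingerprint
# gives the highest correlation score, subject to a decision threshold.
```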