Brown University

Processed Landmarks

Description

Abstract:
Deepfakes can spread misinformation, defamation, and propaganda by faking videos of public speakers. We assume that future deepfakes will be visually indistinguishable from real video, and will also fool current deepfake detection methods. As such, we posit a social verification system that instead validates the truth of an event via a set of videos. To confirm which, if any, videos are being faked at any point in time, we check for consistent facial geometry across videos. We demonstrate that by comparing mouth movement across views using a combination of PCA and hierarchical clustering, we can detect a deepfake with subtle mouth manipulations out of a set of six videos with high accuracy. Using our new multi-view dataset of 25 speakers, we show that our performance gracefully decays as we increase the number of identically faked videos from different input views.
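The consistency check described above can be sketched in miniature: embed each view's mouth-movement signal with PCA, then split the views into two groups with hierarchical clustering and flag the minority group as inconsistent. This is an illustrative reconstruction under simplifying assumptions, not the authors' released code; the synthetic signals, the choice of two principal components, and Ward linkage are all assumptions for the sketch.

```python
# Hypothetical sketch: flag the inconsistent view among six synchronized
# videos by comparing per-view mouth-movement signals with PCA followed
# by hierarchical clustering. Synthetic data stands in for real landmarks.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_views, n_frames = 6, 120

# Shared ground-truth mouth-openness signal seen by every real camera,
# plus small per-view noise; view 3 is "faked" with altered mouth motion.
t = np.linspace(0, 8 * np.pi, n_frames)
signals = np.sin(t) + 0.05 * rng.standard_normal((n_views, n_frames))
signals[3] = np.sin(t + 1.2) + 0.05 * rng.standard_normal(n_frames)

# PCA via SVD: embed each view's frame-by-frame signal in 2-D.
X = signals - signals.mean(axis=0)          # center across views
U, S, _ = np.linalg.svd(X, full_matrices=False)
coords = U[:, :2] * S[:2]                   # per-view 2-D embedding

# Hierarchical clustering into two groups: consistent majority vs. outlier.
labels = fcluster(linkage(coords, method="ward"), t=2, criterion="maxclust")
sizes = np.bincount(labels)
fake_view = int(np.argmin([sizes[l] for l in labels]))  # view in smallest cluster
print("flagged view:", fake_view)
```

In this toy setup the phase-shifted view separates cleanly from the five consistent views, so the singleton cluster identifies it; on real footage the features would be mouth-landmark geometry extracted per frame, as the abstract describes.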

Access Conditions

Use and Reproduction
This work is licensed under the GNU GPL v3 license.

Citation

Tursman, Eleanor; George, Marilyn; Kamara, Seny; et al., "Processed Landmarks" (2020). Brown University Open Data Collection, Dataset and codebase for Towards Untrusted Social Video Verification to Combat Deepfakes via Face Geometry Consistency. Brown Digital Repository. Brown University Library. https://doi.org/10.26300/kyyg-w765
