Detecting Audio-Visual Deepfakes with Fine-Grained Inconsistencies
Format: Article
Language: English
Online access: Request full text
Abstract: Existing methods for audio-visual deepfake detection mainly rely on high-level features to model inconsistencies between audio and visual data. As a result, these approaches usually overlook the finer audio-visual artifacts that are inherent to deepfakes. Here, we propose fine-grained mechanisms for detecting subtle artifacts in both the spatial and temporal domains. First, we introduce a local audio-visual model that captures small spatial regions prone to inconsistencies with the audio; to this end, we adopt a fine-grained mechanism based on a spatially local distance coupled with an attention module. Second, we introduce a temporally local pseudo-fake augmentation that adds samples with subtle temporal inconsistencies to the training set. Experiments on the DFDC and FakeAVCeleb datasets demonstrate the superior generalization of the proposed method compared with the state of the art, under both in-dataset and cross-dataset settings.
DOI: 10.48550/arxiv.2408.06753
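
The abstract names a spatially local distance coupled with an attention module, but this record does not include the paper's implementation. The following is a minimal, hypothetical PyTorch sketch of that idea: per-region visual features are compared against a clip-level audio embedding, and the resulting local distances are attention-pooled into one inconsistency score. All names, tensor shapes, and the choice of cosine distance are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalAudioVisualDistance(nn.Module):
    """Hypothetical module: attention-pooled, per-region audio-visual distance."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores each spatial region for pooling

    def forward(self, visual: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # visual: (B, N, D) features for N spatial regions (e.g. N = H*W)
        # audio:  (B, D)    clip-level audio embedding
        # Per-region cosine distance between each visual region and the audio.
        dist = 1.0 - F.cosine_similarity(visual, audio.unsqueeze(1), dim=-1)  # (B, N)
        # Attention weights decide which regions dominate the final score.
        weights = torch.softmax(self.score(visual).squeeze(-1), dim=-1)       # (B, N)
        return (weights * dist).sum(dim=-1)  # (B,) inconsistency score per clip

# Example: low scores mean the attended regions agree with the audio;
# larger values indicate a localized audio-visual mismatch.
model = LocalAudioVisualDistance(dim=256)
score = model(torch.randn(2, 49, 256), torch.randn(2, 256))  # -> shape (2,)
```

Under this reading, a high pooled distance would flag clips whose audio disagrees with some small facial region, such as the mouth.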
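Likewise, the temporally local pseudo-fake augmentation is only named in the abstract. One plausible reading, sketched below under that assumption, is to corrupt a short audio segment inside an otherwise real clip, so the training set contains samples with subtle, localized desynchronization. The segment lengths, sampling rate, and copy-within-clip strategy are illustrative guesses.

```python
import numpy as np

def temporally_local_pseudo_fake(audio: np.ndarray, sr: int = 16000,
                                 max_seg_s: float = 0.5, rng=None) -> np.ndarray:
    """Hypothetical augmentation: desynchronize one short audio segment.

    Copies a randomly chosen sub-second segment of the waveform to a
    different position, leaving the rest of the clip (and the video)
    untouched, so only a brief temporal inconsistency is introduced.
    """
    rng = rng or np.random.default_rng()
    out = audio.copy()
    seg = int(rng.uniform(0.1, max_seg_s) * sr)   # segment length in samples
    assert len(audio) > seg, "clip must be longer than the swapped segment"
    src = int(rng.integers(0, len(audio) - seg))  # where the segment comes from
    dst = int(rng.integers(0, len(audio) - seg))  # where it is pasted
    out[dst:dst + seg] = audio[src:src + seg]
    return out  # labeled "fake" during training despite being mostly real
```

Training on such samples would push a detector toward the fine temporal cues the abstract argues high-level features miss.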