SARB-DF: A Continual Learning Aided Framework for Deepfake Video Detection Using Self-Attention Residual Block

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, p. 189088-189101
Main authors: Prathibha, P. G.; Tamizharasan, P. S.; Panthakkan, Alavikunhu; Mansoor, Wathiq; Al Ahmad, Hussain
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: The creation and dissemination of deepfake videos have become increasingly prevalent, facilitated by advanced technological tools. These synthetic videos pose significant security challenges, as they can spread misinformation and enable manipulation, thereby undermining trust in digital media. Owing to the continuous generation of novel synthetic data, deepfake detection models must be regularly updated to enhance their generalization capabilities. In this research article, we propose a deepfake video detection system with a self-attention mechanism and continual learning. A self-attention residual module is specifically introduced to extract detailed facial features. We equip the detection process with continual learning to improve detection capability and generalization. The framework uses weight regularization and a dynamic sample set to continuously learn and adapt to new synthetic data. We demonstrate the proposed approach on an Xception-Net backbone with the benchmark Celeb-DF and FaceForensics++ datasets. Experimental results show an AUC of 99.26% on Celeb-DF, and AUCs of 99.67%, 93.57%, 99.78%, and 90.00% on the FaceForensics++ manipulation categories Deepfakes, Face2Face, FaceSwap, and NeuralTextures, respectively.
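The summary names two technical components: a self-attention residual block for facial feature extraction, and continual learning via weight regularization with a dynamic sample set. The sketch below is a minimal PyTorch illustration, not the authors' released code: it assumes SAGAN-style spatial self-attention inside a standard residual block, and an EWC-style quadratic penalty as the weight regularizer; all class names, channel sizes, and the lambda value are illustrative assumptions.

```python
# Minimal sketch of the two components named in the summary (assumptions:
# SAGAN-style attention, EWC-style regularizer; not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttentionResidualBlock(nn.Module):
    """Residual block whose output is refined by spatial self-attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        # 1x1 projections for query/key/value (non-local style attention)
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned attention gain

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # standard residual path
        h = F.relu(self.bn1(self.conv1(x)))
        h = F.relu(self.bn2(self.conv2(h)) + x)
        # spatial self-attention over the residual output
        b, c, height, width = h.shape
        q = self.query(h).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(h).flatten(2)                     # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)            # (B, HW, HW)
        v = self.value(h).flatten(2)                   # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, height, width)
        return h + self.gamma * out


def weight_regularization_penalty(model: nn.Module,
                                  old_params: dict,
                                  importance: dict,
                                  lam: float = 100.0) -> torch.Tensor:
    """EWC-style quadratic penalty anchoring weights near previous-task values."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in importance:  # e.g., a Fisher-information estimate per weight
            penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return lam * penalty
```

In a continual-learning loop of the kind the summary describes, such a penalty would be added to the classification loss on each batch of new synthetic data, while a dynamic sample set of past examples is replayed alongside it.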
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3517170