Targeted Augmented Data for Audio Deepfake Detection
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The availability of highly convincing audio deepfake generators highlights the need for designing robust audio deepfake detectors. Existing works often rely solely on the real and fake data available in the training set, which may lead to overfitting and thereby reduce robustness to unseen manipulations. To enhance the generalization capabilities of audio deepfake detectors, we propose a novel augmentation method that generates audio pseudo-fakes targeting the decision boundary of the model. Inspired by adversarial attacks, we perturb original real data to synthesize pseudo-fakes with ambiguous prediction probabilities. Comprehensive experiments on two well-known architectures demonstrate that the proposed augmentation improves the generalization capabilities of these architectures.
DOI: 10.48550/arxiv.2407.07598
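The abstract describes perturbing real audio toward the detector's decision boundary so the resulting pseudo-fakes receive ambiguous prediction probabilities. The sketch below illustrates that idea with a PGD-style loop in PyTorch that nudges a real waveform until the detector's fake-probability sits near 0.5. The detector interface, the MSE loss toward 0.5, and the `epsilon`, `alpha`, and `steps` values are illustrative assumptions, not settings taken from the paper.

```python
# Hypothetical sketch: adversarial-style augmentation that turns real audio
# into "pseudo-fakes" near the detector's decision boundary (p(fake) ~ 0.5).
import torch
import torch.nn.functional as F

def make_pseudo_fakes(detector, real_waveforms, epsilon=1e-3, alpha=2e-4, steps=10):
    """Perturb real waveforms so the detector's fake-probability approaches 0.5."""
    detector.eval()
    x = real_waveforms.clone().detach()
    delta = torch.zeros_like(x, requires_grad=True)           # learnable perturbation
    target = torch.full((x.size(0),), 0.5, device=x.device)   # ambiguous target probability

    for _ in range(steps):
        # Assumed interface: detector returns one fake-vs-real logit per clip.
        logits = detector(x + delta).squeeze(-1)
        p_fake = torch.sigmoid(logits)
        loss = F.mse_loss(p_fake, target)                      # push prediction toward 0.5
        loss.backward()

        with torch.no_grad():
            delta -= alpha * delta.grad.sign()                 # signed gradient step
            delta.clamp_(-epsilon, epsilon)                    # keep the perturbation small
        delta.grad.zero_()

    return (x + delta).detach()                                # pseudo-fake training samples
```

In training, these pseudo-fakes would be mixed into the fake class alongside the original fake data, the intent being to populate the region around the decision boundary that ordinary training data rarely covers.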