A robust audio deepfake detection system via multi-view feature
Main authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: With the advancement of generative modeling techniques, synthetic human speech has become increasingly indistinguishable from real speech, posing serious challenges for audio deepfake detection (ADD) systems. In this paper, we exploit audio features to improve the generalizability of ADD systems. We investigate ADD performance over a broad range of audio features, including various handcrafted features and learning-based features. Experiments show that learning-based audio features pretrained on a large amount of data generalize better than handcrafted features in out-of-domain scenarios. We then further improve the generalizability of the ADD system using the proposed multi-feature approaches, which incorporate complementary information from features of different views. The model trained on ASV2019 data achieves an equal error rate of 24.27% on the In-the-Wild dataset.
DOI: 10.48550/arxiv.2403.01960