A Light Weight Model for Active Speaker Detection
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Active speaker detection is a challenging task in audio-visual scenario understanding, which aims to detect who is speaking in single- or multi-speaker scenarios. The task has received extensive attention as it is crucial in applications such as speaker diarization, speaker tracking, and automatic video editing. Existing studies try to improve performance by feeding in information about multiple candidates and designing complex models. Although these methods achieve outstanding performance, their high memory and computational cost makes them difficult to apply in resource-limited scenarios. We therefore construct a lightweight active speaker detection architecture by reducing the number of input candidates, splitting 2D and 3D convolutions for audio-visual feature extraction, and applying a gated recurrent unit (GRU) with low computational complexity for cross-modal modeling. Experimental results on the AVA-ActiveSpeaker dataset show that our framework achieves competitive mAP (94.1% vs. 94.2%) while consuming significantly fewer resources than the state-of-the-art method, especially in model parameters (1.0M vs. 22.5M, about 23x fewer) and FLOPs (0.6G vs. 2.6G, about 4x fewer). Our framework also performs well on the Columbia dataset, showing good robustness. The code and model weights are available at https://github.com/Junhua-Liao/Light-ASD.
DOI: 10.48550/arxiv.2303.04439
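
To make the design described in the abstract concrete, here is a minimal PyTorch sketch of the two ideas it names: splitting 2D (spatial) and 3D (temporal) convolutions in the visual encoder, and using a lightweight GRU over the fused per-frame features. All module names, channel sizes, kernel shapes, and the additive fusion scheme are illustrative assumptions, not the authors' actual implementation; see the linked repository for that.

```python
# Hypothetical sketch of a split 2D + 3D convolution block and a GRU head,
# loosely following the abstract of Light-ASD. Shapes and fusion are assumed.
import torch
import torch.nn as nn


class SplitConvBlock(nn.Module):
    """Parallel spatial-only and temporal-only convolutions, summed,
    instead of a single heavy full 3D convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # 1 x k x k kernel: purely spatial, applied frame by frame.
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1))
        # k x 1 x 1 kernel: purely temporal, mixing adjacent frames.
        self.temporal = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0))
        self.norm = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, time, height, width)
        return self.act(self.norm(self.spatial(x) + self.temporal(x)))


class TinyASD(nn.Module):
    """Toy audio-visual model: split-conv visual encoder, small audio
    encoder, and a GRU over fused per-frame features."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.visual = nn.Sequential(
            SplitConvBlock(1, 32),               # grayscale face crops
            nn.MaxPool3d((1, 2, 2)),
            SplitConvBlock(32, feat_dim),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # keep time, pool space
        )
        self.audio = nn.Sequential(              # log-mel spectrogram input
            nn.Conv2d(1, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((None, 1)),     # keep time, pool frequency
        )
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, 1)       # per-frame speaking logit

    def forward(self, faces, mels):
        # faces: (B, 1, T, H, W); mels: (B, 1, T, F)
        v = self.visual(faces).flatten(2).transpose(1, 2)  # (B, T, D)
        a = self.audio(mels).squeeze(-1).transpose(1, 2)   # (B, T, D)
        fused, _ = self.gru(v + a)            # cheap additive fusion
        return self.head(fused).squeeze(-1)   # (B, T) per-frame logits


model = TinyASD()
logits = model(torch.randn(2, 1, 25, 112, 112), torch.randn(2, 1, 25, 40))
print(logits.shape)  # torch.Size([2, 25])
```

The split keeps the parameter count low: two factorized kernels (1x3x3 and 3x1x1) cover spatial and temporal mixing at a fraction of the cost of a dense 3x3x3 kernel, and the GRU adds temporal context with far fewer FLOPs than an attention-based cross-modal module.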