XLAVS-R: Cross-Lingual Audio-Visual Speech Representation Learning for Noise-Robust Speech Perception
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Speech recognition and translation systems perform poorly on noisy inputs, which are frequent in realistic environments. Augmenting these systems with visual signals has the potential to improve robustness to noise. However, audio-visual (AV) data is only available in limited amounts and for fewer languages than audio-only resources. To address this gap, we present XLAVS-R, a cross-lingual audio-visual speech representation model for noise-robust speech recognition and translation in over 100 languages. It is designed to maximize the benefits of limited multilingual AV pre-training data by building on top of audio-only multilingual pre-training and simplifying existing pre-training schemes. Extensive evaluation on the MuAViC benchmark shows the strength of XLAVS-R on downstream audio-visual speech recognition and translation tasks, where it outperforms the previous state of the art by up to 18.5% WER and 4.7 BLEU given noisy AV inputs, and enables strong zero-shot audio-visual ability with audio-only fine-tuning.
DOI: 10.48550/arxiv.2403.14402
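The WER and BLEU figures cited in the abstract are the standard word error rate and translation quality metrics for speech recognition and speech translation. The paper's own evaluation code is not part of this record, so the following is only a minimal sketch of how such scores are typically computed with common open-source tooling (the jiwer and sacrebleu packages); the example data is hypothetical and is not taken from XLAVS-R or the MuAViC benchmark.

```python
# Minimal sketch: computing WER and BLEU as typically reported for
# speech recognition / translation systems. Example strings are
# hypothetical; they do not come from the XLAVS-R paper or MuAViC.
import jiwer
import sacrebleu

# ASR evaluation: word error rate between reference transcripts and hypotheses.
asr_refs = ["the meeting starts at noon", "please close the window"]
asr_hyps = ["the meeting starts at noon", "please close the windows"]
wer = jiwer.wer(asr_refs, asr_hyps)      # fraction of word errors
print(f"WER: {100 * wer:.1f}%")          # usually reported as a percentage

# Speech translation evaluation: corpus-level BLEU against reference translations.
st_refs = [["La reunión empieza al mediodía.", "Por favor cierra la ventana."]]
st_hyps = ["La reunión empieza a mediodía.", "Por favor cierre la ventana."]
bleu = sacrebleu.corpus_bleu(st_hyps, st_refs)
print(f"BLEU: {bleu.score:.1f}")
```

Lower WER and higher BLEU are better, so the abstract's "up to 18.5% WER and 4.7 BLEU" improvements refer to a reduction in word error rate and a gain in BLEU score relative to the previous state of the art on noisy AV inputs.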