End-to-End Multi-Person Audio/Visual Automatic Speech Recognition
Main Authors: , , ,
Format: Article
Language: English
Subjects:
Abstract: Traditionally, audio-visual automatic speech recognition has been studied under the assumption that the speaking face in the visual signal is the face matching the audio. However, in a more realistic setting, when multiple faces are potentially on screen, one needs to decide which face to feed to the A/V ASR system. The present work takes the recent progress of A/V ASR one step further and considers the scenario where multiple people are simultaneously on screen (multi-person A/V ASR). We propose a fully differentiable A/V ASR model that is able to handle multiple face tracks in a video. Instead of relying on two separate models for speaker face selection and audio-visual ASR on a single face track, we introduce an attention layer to the ASR encoder that is able to soft-select the appropriate face video track. Experiments carried out on an A/V system trained on over 30k hours of YouTube videos illustrate that the proposed approach can automatically select the proper face tracks with minor WER degradation compared to an oracle selection of the speaking face, while still showing the benefits of employing the visual signal over the audio alone.
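The soft-selection idea in the abstract lends itself to a brief sketch. Below is a minimal, hypothetical NumPy illustration of attention over candidate face tracks: per-frame audio features act as queries, each track's visual features act as keys, and the resulting attention weights form a convex (hence differentiable) combination of the track features. All function names, shapes, and the dot-product scoring rule are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_select_face_tracks(audio_feats, face_feats, W_q, W_k):
    """Soft-select among N candidate face tracks via dot-product attention.

    audio_feats: (T, Da)    per-frame audio encoder features (query source)
    face_feats:  (N, T, Dv) visual features for N candidate face tracks
    W_q:         (Da, D)    projects audio features to queries
    W_k:         (Dv, D)    projects visual features to keys
    Returns the (T, Dv) attention-weighted visual features and (T, N) weights.
    """
    q = audio_feats @ W_q                                  # (T, D)
    k = face_feats @ W_k                                   # (N, T, D)
    # Score each track against the audio query, frame by frame.
    scores = np.einsum("td,ntd->tn", q, k) / np.sqrt(q.shape[-1])
    weights = softmax(scores, axis=-1)                     # rows sum to 1
    # Convex combination of track features: a differentiable "soft" pick,
    # so the whole A/V encoder can be trained end to end.
    fused = np.einsum("tn,ntd->td", weights, face_feats)   # (T, Dv)
    return fused, weights

# Tiny smoke test with random features (all shapes are illustrative).
rng = np.random.default_rng(0)
T, N, Da, Dv, D = 50, 3, 256, 512, 128
fused, weights = soft_select_face_tracks(
    rng.standard_normal((T, Da)),
    rng.standard_normal((N, T, Dv)),
    0.05 * rng.standard_normal((Da, D)),
    0.05 * rng.standard_normal((Dv, D)),
)
assert fused.shape == (T, Dv) and np.allclose(weights.sum(axis=-1), 1.0)
```

One natural design choice, consistent with the abstract but not confirmed by it, is to concatenate the fused visual features with the audio features before the shared ASR encoder; with a single face track the layer reduces to ordinary single-face A/V ASR.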
DOI: 10.48550/arxiv.2205.05586