Audio-Visual Speech Codecs: Rethinking Audio-Visual Speech Enhancement by Re-Synthesis
Format: Article
Language: English
Abstract: Since facial actions such as lip movements contain significant information
about speech content, it is not surprising that audio-visual speech enhancement
methods are more accurate than their audio-only counterparts. Yet,
state-of-the-art approaches still struggle to generate clean, realistic speech
without noise artifacts and unnatural distortions in challenging acoustic
environments. In this paper, we propose a novel audio-visual speech enhancement
framework for high-fidelity telecommunications in AR/VR. Our approach leverages
audio-visual speech cues to generate the codes of a neural speech codec,
enabling efficient synthesis of clean, realistic speech from noisy signals.
Given the importance of speaker-specific cues in speech, we focus on developing
personalized models that work well for individual speakers. We demonstrate the
efficacy of our approach on a new audio-visual speech dataset collected in an
unconstrained, large vocabulary setting, as well as existing audio-visual
datasets, outperforming speech enhancement baselines on both quantitative
metrics and human evaluation studies. Please see the supplemental video for
qualitative results at
https://github.com/facebookresearch/facestar/releases/download/paper_materials/video.mp4.
DOI: 10.48550/arxiv.2203.17263
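To make the re-synthesis idea in the abstract concrete, the sketch below shows one way audio-visual features could be mapped to the discrete codes of a neural speech codec. It is a minimal illustration under assumed names and dimensions (a hypothetical `AVCodePredictor`, 80-dim audio features, 512-dim lip embeddings, four codebooks of size 1024), not the paper's actual architecture; the pretrained codec decoder that turns predicted codes into a clean waveform is assumed to exist separately.

```python
# Minimal sketch (not the paper's model): fuse noisy-audio features with
# lip-video features and predict discrete neural-codec code indices.
# All module names, dimensions, and codebook sizes are illustrative assumptions.
import torch
import torch.nn as nn


class AVCodePredictor(nn.Module):
    def __init__(self, audio_dim=80, video_dim=512, hidden=256,
                 codebook_size=1024, num_codebooks=4):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, hidden)   # project noisy-audio features
        self.video_proj = nn.Linear(video_dim, hidden)   # project lip-region embeddings
        self.fusion = nn.GRU(2 * hidden, hidden, num_layers=2, batch_first=True)
        # One classification head per codec codebook (residual-VQ-style codec assumed).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, codebook_size) for _ in range(num_codebooks)]
        )

    def forward(self, audio_feats, video_feats):
        # audio_feats: (B, T, audio_dim), video_feats: (B, T, video_dim), time-aligned.
        x = torch.cat([self.audio_proj(audio_feats),
                       self.video_proj(video_feats)], dim=-1)
        h, _ = self.fusion(x)
        # Logits over code indices, shape (B, T, num_codebooks, codebook_size).
        return torch.stack([head(h) for head in self.heads], dim=2)


# Usage with random placeholder features:
model = AVCodePredictor()
audio = torch.randn(2, 100, 80)     # e.g. log-mel frames of the noisy signal
video = torch.randn(2, 100, 512)    # e.g. per-frame lip embeddings from a visual encoder
codes = model(audio, video).argmax(dim=-1)   # (2, 100, 4) predicted codec codes
# A pretrained neural codec decoder (not shown) would then re-synthesize
# clean speech from these predicted codes.
```

The key design point the abstract describes is that enhancement becomes a classification-then-synthesis problem: instead of denoising the waveform directly, the model predicts codec codes, and the codec's decoder guarantees that the output is always a clean, realistic speech signal.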