Facetron: A Multi-speaker Face-to-Speech Model based on Cross-modal Latent Representations
Format: Article
Language: English
Abstract: In this paper, we propose a multi-speaker face-to-speech waveform generation
model that also works for unseen speaker conditions. Using a generative
adversarial network (GAN) with linguistic and speaker characteristic features
as auxiliary conditions, our method directly converts face images into speech
waveforms under an end-to-end training framework. The linguistic features are
extracted from lip movements using a lip-reading model, and the speaker
characteristic features are predicted from face images using cross-modal
learning with a pre-trained acoustic model. Since these two features are
uncorrelated and controlled independently, we can flexibly synthesize speech
waveforms whose speaker characteristics vary depending on the input face
images. We show the superiority of our proposed model over conventional methods
in terms of objective and subjective evaluation results. Specifically, we
evaluate the performance of the linguistic features by measuring their accuracy on
an automatic speech recognition task. In addition, we estimate speaker and
gender similarity for multi-speaker and unseen conditions, respectively. We
also evaluate the naturalness of the synthesized speech waveforms using a mean
opinion score (MOS) test and non-intrusive objective speech quality assessment
(NISQA). Demo samples of the proposed and other models are available at
https://sam-0927.github.io/
DOI: 10.48550/arxiv.2107.12003
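
The core idea in the abstract, a waveform generator conditioned independently on linguistic features from a lip-reading model and on a face-derived speaker embedding, can be illustrated with a minimal sketch. This is not the authors' Facetron implementation: the module name, feature dimensions, layer choices, and the 160-sample hop below are illustrative assumptions, written in PyTorch.

```python
# Minimal conceptual sketch of two-stream conditioning for face-to-speech
# generation, as described in the abstract. NOT the authors' Facetron model:
# all dimensions, layer choices, and the 160-sample hop are assumptions.
import torch
import torch.nn as nn


class TwoStreamWaveGenerator(nn.Module):
    """Upsample frame-rate linguistic features to a waveform while a single
    speaker embedding (predicted from a face image) is broadcast to every
    frame as a global condition."""

    def __init__(self, ling_dim=256, spk_dim=128, hidden=256):
        super().__init__()
        self.pre = nn.Conv1d(ling_dim + spk_dim, hidden, kernel_size=3, padding=1)
        # Two transposed convolutions upsample by 10 x 16 = 160 samples per frame.
        self.up = nn.Sequential(
            nn.ConvTranspose1d(hidden, hidden // 2, kernel_size=20, stride=10, padding=5),
            nn.LeakyReLU(0.1),
            nn.ConvTranspose1d(hidden // 2, 1, kernel_size=32, stride=16, padding=8),
        )

    def forward(self, ling, spk):
        # ling: (batch, frames, ling_dim)  -- e.g. lip-reading features
        # spk:  (batch, spk_dim)           -- face-derived speaker embedding
        spk = spk.unsqueeze(1).expand(-1, ling.size(1), -1)   # repeat per frame
        x = torch.cat([ling, spk], dim=-1).transpose(1, 2)    # (batch, channels, frames)
        x = torch.relu(self.pre(x))
        return torch.tanh(self.up(x)).squeeze(1)              # (batch, frames * 160)


if __name__ == "__main__":
    gen = TwoStreamWaveGenerator()
    ling = torch.randn(2, 50, 256)   # 50 frames of linguistic features
    spk = torch.randn(2, 128)        # speaker embedding from a face encoder
    print(gen(ling, spk).shape)      # torch.Size([2, 8000])
```

Because the two conditions enter through separate inputs, swapping the speaker embedding while keeping the linguistic stream fixed changes only the voice identity of the output, which is the flexibility the abstract describes.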