Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video

Synthesizing realistic videos according to a given speech is still an open challenge. Previous works have been plagued by issues such as inaccurate lip shape generation and poor image quality. The key reason is that only the motion and appearance of limited facial areas (e.g., the lip area) are directly driven by the input speech. Therefore, directly learning a mapping function from speech to the entire head image is prone to ambiguity, particularly when using a short video for training. We thus propose a decomposition-synthesis-composition framework named Speech to Lip (Speech2Lip) that disentangles speech-sensitive and speech-insensitive motion/appearance to facilitate effective learning from limited training data, resulting in the generation of natural-looking videos. First, given a fixed head pose (i.e., canonical space), we present a speech-driven implicit model for lip image generation which concentrates on learning speech-sensitive motion and appearance. Next, to model the major speech-insensitive motion (i.e., head movement), we introduce a geometry-aware mutual explicit mapping (GAMEM) module that establishes geometric mappings between different head poses. This allows us to paste lip images generated in the canonical space onto head images with arbitrary poses and synthesize talking videos with natural head movements. In addition, a Blend-Net and a contrastive sync loss are introduced to enhance the overall synthesis performance. Quantitative and qualitative results on three benchmarks demonstrate that our model can be trained on a video just a few minutes long and achieve state-of-the-art performance in both visual quality and speech-visual synchronization. Code: https://github.com/CVMI-Lab/Speech2Lip.
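As a reading aid only, the following is a minimal PyTorch-style sketch of the decomposition-synthesis-composition idea described in the abstract: a speech-driven generator producing a lip patch in a fixed canonical pose, a bilinear warp standing in for the GAMEM mapping to the target head pose, a small Blend-Net that composites the patch into the head frame, and an InfoNCE-style stand-in for the contrastive sync loss. All module names, dimensions, and the exact loss formulation are assumptions made for illustration, not the authors' code; the actual implementation is in the repository linked above.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical module names (LipGenerator, gamem_warp, BlendNet) are illustrative only;
# see https://github.com/CVMI-Lab/Speech2Lip for the official implementation.

class LipGenerator(nn.Module):
    """Speech-driven lip synthesis in a fixed (canonical) head pose.
    Stand-in for the paper's implicit model: maps an audio feature to an RGB lip patch."""
    def __init__(self, audio_dim=256, patch=64):
        super().__init__()
        self.patch = patch
        self.mlp = nn.Sequential(
            nn.Linear(audio_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * patch * patch),
        )

    def forward(self, audio_feat):                      # (B, audio_dim)
        x = torch.sigmoid(self.mlp(audio_feat))
        return x.view(-1, 3, self.patch, self.patch)    # (B, 3, P, P)

def gamem_warp(canonical_lip, grid):
    """Placeholder for the GAMEM step: re-project the canonical-space lip patch
    into the target head pose via a sampling grid (here a plain bilinear warp;
    the paper derives the mapping from scene geometry)."""
    return F.grid_sample(canonical_lip, grid, align_corners=False)

class BlendNet(nn.Module):
    """Blends the warped lip patch back into the original head frame."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, head_frame, warped_lip):
        return torch.sigmoid(self.conv(torch.cat([head_frame, warped_lip], dim=1)))

def contrastive_sync_loss(audio_emb, visual_emb, temperature=0.07):
    """InfoNCE-style audio-visual sync loss over a batch; the paper's exact form may differ."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    visual_emb = F.normalize(visual_emb, dim=-1)
    logits = audio_emb @ visual_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    B, P = 2, 64
    gen, blend = LipGenerator(patch=P), BlendNet()
    audio = torch.randn(B, 256)                         # dummy per-frame audio features
    head_frame = torch.rand(B, 3, P, P)                 # target-pose head crop
    grid = F.affine_grid(torch.eye(2, 3).repeat(B, 1, 1), (B, 3, P, P),
                         align_corners=False)           # identity warp as a stand-in for GAMEM
    lip = gen(audio)                                    # canonical-space lip patch
    out = blend(head_frame, gamem_warp(lip, grid))      # composited frame
    print(out.shape)                                    # torch.Size([2, 3, 64, 64])
    a, v = torch.randn(B, 128), torch.randn(B, 128)     # dummy audio/visual embeddings
    print(contrastive_sync_loss(a, v).item())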


Bibliographic Details
Main Authors: Wu, Xiuzhe; Hu, Pengfei; Wu, Yang; Lyu, Xiaoyang; Cao, Yan-Pei; Shan, Ying; Yang, Wenming; Sun, Zhongqian; Qi, Xiaojuan
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
DOI: 10.48550/arxiv.2309.04814
Source: arXiv.org