4D Facial Expression Diffusion Model
Facial expression generation is one of the most challenging and long-sought aspects of character animation, with many interesting applications. This challenging task has traditionally relied heavily on digital craftspersons and remains largely unexplored by automated methods. In this paper, we introduce a generative framework for producing 3D facial expression sequences (i.e., 4D faces) that can be conditioned on different inputs to animate an arbitrary 3D face mesh. It comprises two tasks: (1) learning a generative model over a set of 3D landmark sequences, and (2) generating 3D mesh sequences of an input facial mesh driven by the generated landmark sequences. The generative model is based on a Denoising Diffusion Probabilistic Model (DDPM), which has achieved remarkable success in generative tasks in other domains. While it can be trained unconditionally, its reverse process can still be conditioned on various signals. This allows us to efficiently develop several downstream conditional-generation tasks, using expression labels, text, partial sequences, or simply a facial geometry. To obtain the full mesh deformation, we then develop a landmark-guided encoder-decoder that applies the geometric deformation embedded in the landmarks to a given facial mesh. Experiments show that our model learns to generate realistic, high-quality expressions from a dataset of relatively small size, improving over state-of-the-art methods. Videos and qualitative comparisons with other methods can be found at https://github.com/ZOUKaifeng/4DFM. Code and models will be made available upon acceptance.
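As a concrete illustration of the conditional reverse process described in the abstract, the sketch below shows generic DDPM sampling over flattened 3D landmark sequences. This is a minimal sketch under assumed details, not the authors' released implementation (see the GitHub link above for that): the `denoiser` network interface, the linear noise schedule, the number of steps, and the tensor shapes are all hypothetical.

```python
import torch

# Hypothetical linear noise schedule; the paper's actual schedule may differ.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_bar = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample_landmark_sequence(denoiser, cond=None, seq_len=60, n_landmarks=68):
    """Draw one landmark sequence x_0 by iterating the DDPM reverse process.

    `denoiser(x_t, t, cond)` is an assumed interface: a network predicting
    the noise added at step t, optionally conditioned on a signal `cond`
    (e.g., an expression-label or text embedding); cond=None gives the
    unconditional case.
    """
    x = torch.randn(1, seq_len, n_landmarks * 3)  # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = denoiser(x, torch.tensor([t]), cond)
        # Posterior mean of x_{t-1} given x_t and the predicted noise.
        coef = betas[t] / torch.sqrt(1.0 - alphas_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        # Add noise at every step except the last.
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # shape (1, seq_len, n_landmarks * 3)
```

In the paper's two-stage pipeline, a sequence sampled this way would then drive the landmark-guided encoder-decoder that transfers the landmarks' deformation onto the full face mesh.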
Published in: | ACM Transactions on Multimedia Computing, Communications, and Applications, 2024-03 |
---|---|
Main authors: | Zou, Kaifeng; Faisan, Sylvain; Yu, Boyang; Valette, Sébastien; Seo, Hyewon |
Format: | Article |
Language: | English |
Subjects: | Computer Science; Computer systems organization; Embedded systems; Modeling and Simulation; Network reliability; Networks; Redundancy; Robotics |
Online access: | Full text |
DOI: | 10.1145/3653455 |
ISSN: | 1551-6857 (print); 1551-6865 (electronic) |
Publisher: | ACM, New York, NY (published 2024-03-28) |
Source: | ACM Digital Library Complete |