AnimateMe: 4D Facial Expressions via Diffusion Models

The field of photorealistic 3D avatar reconstruction and generation has garnered significant attention in recent years; however, animating such avatars remains challenging. Recent advances in diffusion models have notably enhanced the capabilities of generative models in 2D animation. In this work, we directly utilize these models within the 3D domain to achieve controllable and high-fidelity 4D facial animation. By integrating the strengths of diffusion processes and geometric deep learning, we employ Graph Neural Networks (GNNs) as denoising diffusion models in a novel approach, formulating the diffusion process directly on the mesh space and enabling the generation of 3D facial expressions. This facilitates the generation of facial deformations through a mesh-diffusion-based model. Additionally, to ensure temporal coherence in our animations, we propose a consistent noise sampling method. Under a series of both quantitative and qualitative experiments, we showcase that the proposed method outperforms prior work in 4D expression synthesis by generating high-fidelity extreme expressions. Furthermore, we applied our method to textured 4D facial expression generation, implementing a straightforward extension that involves training on a large-scale textured 4D facial expression database.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Gerogiannis, Dimitrios, Papantoniou, Foivos Paraperas, Potamias, Rolandos Alexandros, Lattas, Alexandros, Moschoglou, Stylianos, Ploumpis, Stylianos, Zafeiriou, Stefanos
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
description The field of photorealistic 3D avatar reconstruction and generation has garnered significant attention in recent years; however, animating such avatars remains challenging. Recent advances in diffusion models have notably enhanced the capabilities of generative models in 2D animation. In this work, we directly utilize these models within the 3D domain to achieve controllable and high-fidelity 4D facial animation. By integrating the strengths of diffusion processes and geometric deep learning, we employ Graph Neural Networks (GNNs) as denoising diffusion models in a novel approach, formulating the diffusion process directly on the mesh space and enabling the generation of 3D facial expressions. This facilitates the generation of facial deformations through a mesh-diffusion-based model. Additionally, to ensure temporal coherence in our animations, we propose a consistent noise sampling method. Under a series of both quantitative and qualitative experiments, we showcase that the proposed method outperforms prior work in 4D expression synthesis by generating high-fidelity extreme expressions. Furthermore, we applied our method to textured 4D facial expression generation, implementing a straightforward extension that involves training on a large-scale textured 4D facial expression database.
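The description above outlines two ideas: running the denoising diffusion process directly on mesh vertex coordinates with a GNN as the denoiser, and sampling noise consistently across frames so the animation stays temporally coherent. The paper's actual architecture and training details are not given in this record, so the following is only a toy sketch of those two ideas: `gnn_denoiser` is a stand-in (one round of neighbor averaging, not a learned network), the mesh, shapes, and noise schedule are illustrative assumptions, and "consistent noise sampling" is read here as reusing one noise draw for every frame of a sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear noise schedule for a T-step diffusion process.
T = 10
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t, noise):
    """Forward process: diffuse clean vertex positions x0 to step t."""
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * noise

def gnn_denoiser(x_t, t, adjacency):
    """Stand-in for a learned GNN denoiser: one round of neighbor
    averaging over the mesh graph. A real model would predict the
    noise (or clean signal) from vertex features and the step t."""
    deg = adjacency.sum(axis=1, keepdims=True)
    return adjacency @ x_t / np.maximum(deg, 1.0)

# A tiny 4-vertex "mesh" with tetrahedron connectivity.
adj = np.ones((4, 4)) - np.eye(4)
x0 = rng.standard_normal((4, 3))  # N vertices x 3 coordinates

# Consistent noise sampling: draw the noise once per sequence and reuse
# it for every frame, so the per-frame diffusion trajectories differ
# only by the underlying deformation, not by independent noise.
shared_noise = rng.standard_normal(x0.shape)
frames = [x0 + 0.1 * k for k in range(3)]  # fake expression sequence
noised = [q_sample(f, T - 1, shared_noise) for f in frames]
```

With shared noise, the difference between consecutive noised frames is exactly the (scaled) difference between the clean frames, which is the coherence property the abstract's consistent sampling is aiming at; with independent draws per frame, that difference would be dominated by noise.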
doi 10.48550/arxiv.2403.17213
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition