DiM-Gestor: Co-Speech Gesture Generation with Adaptive Layer Normalization Mamba-2

Detailed Description

Speech-driven gesture generation using transformer-based generative models is a rapidly advancing area within virtual human creation. However, existing models face significant challenges due to their quadratic time and space complexity, which limits scalability and efficiency. To address these limitations, we introduce DiM-Gestor, an end-to-end generative model built on the Mamba-2 architecture. DiM-Gestor features a dual-component framework: (1) a fuzzy feature extractor and (2) a speech-to-gesture mapping module, both built on Mamba-2. The fuzzy feature extractor, which integrates a Chinese pre-trained model with Mamba-2, autonomously extracts implicit, continuous speech features. These features are synthesized into a unified latent representation and then processed by the speech-to-gesture mapping module. This module employs an Adaptive Layer Normalization (AdaLN)-enhanced Mamba-2 mechanism that applies the conditioning transformation uniformly across all sequence tokens, enabling precise modeling of the nuanced interplay between speech features and gesture dynamics. A diffusion model is used for training and for sampling diverse gesture outputs. Extensive subjective and objective evaluations on the newly released Chinese Co-Speech Gestures (CCG) dataset corroborate the efficacy of the proposed model: compared with a Transformer-based architecture, our approach delivers competitive results while reducing memory usage by approximately 2.4 times and increasing inference speed by 2 to 4 times. Additionally, we release the CCG dataset, comprising 15.97 hours (six styles across five scenarios) of 3D full-body skeleton gesture motion performed by professional Chinese TV broadcasters.
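The AdaLN mechanism described above follows the conditioning pattern familiar from diffusion transformers: the conditioning signal (here, the fused speech latent plus the diffusion timestep embedding) is mapped to per-channel shift, scale, and gate values that are applied identically to every token around the sequence-mixing layer. The snippet below is a minimal PyTorch sketch of that pattern only; it is not the authors' implementation, the generic recurrent mixer stands in for the Mamba-2 block, and all class, function, and parameter names are hypothetical.

```python
# Minimal sketch of AdaLN-style conditioning around a sequence mixer (PyTorch).
# Names are illustrative; DiM-Gestor's actual modules and dimensions may differ.
import torch
import torch.nn as nn


class AdaLNMixerBlock(nn.Module):
    """One denoiser block: the conditioning vector produces shift/scale/gate
    values that are broadcast uniformly over all gesture-sequence tokens."""

    def __init__(self, d_model: int, d_cond: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model, elementwise_affine=False)
        # Stand-in token mixer; a Mamba-2 layer would take this place in the paper.
        self.mixer = nn.GRU(d_model, d_model, batch_first=True)
        # Conditioning MLP emits shift, scale, and gate (3 * d_model values).
        self.ada_ln = nn.Sequential(nn.SiLU(), nn.Linear(d_cond, 3 * d_model))

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) noisy gesture tokens
        # cond: (batch, d_cond) fused speech features + timestep embedding
        shift, scale, gate = self.ada_ln(cond).chunk(3, dim=-1)
        h = self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        h, _ = self.mixer(h)
        return x + gate.unsqueeze(1) * h  # gated residual update


# Toy usage: 120 gesture frames, 256-dim tokens, 512-dim conditioning vector.
block = AdaLNMixerBlock(d_model=256, d_cond=512)
x = torch.randn(2, 120, 256)
cond = torch.randn(2, 512)
print(block(x, cond).shape)  # torch.Size([2, 120, 256])
```

Because the shift, scale, and gate values depend only on the conditioning vector and are shared across tokens, the conditioning cost is independent of sequence length, which is consistent with the efficiency motivation stated in the abstract.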

Bibliographic Details
Main Authors: Zhang, Fan; Zhao, Siyuan; Ji, Naye; Wang, Zhaohan; Wu, Jingmei; Gao, Fuxing; Ye, Zhenqing; Yan, Leyao; Dai, Lanxin; Geng, Weidong; Lyu, Xin; Zhao, Bozuo; Yu, Dingguo; Du, Hui; Hu, Bin
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Graphics; Computer Science - Human-Computer Interaction; Computer Science - Multimedia; Computer Science - Sound
DOI: 10.48550/arxiv.2411.16729
Date: 2024-11-23
Source: arXiv.org
Online Access: Request full text