Learning to dance: A graph convolutional adversarial network to generate realistic dance motions from audio


Bibliographic Details
Published in: Computers & Graphics, 2021-02, Vol. 94, pp. 11-21
Main Authors: Ferreira, João P., Coutinho, Thiago M., Gomes, Thiago L., Neto, José F., Azevedo, Rafael, Martins, Renato, Nascimento, Erickson R.
Format: Article
Language: English
Online Access: Full text
Description:
Highlights:
•A methodology for conditioned human motion generation.
•A novel multimodal dataset that will be made available to the community.
•Our method shows that using GCNs provides a more coherent approach for the data structure inherent to the problem.
•Our approach generates realistic samples, achieving benchmarks similar to real motions in a perceptual study.

Synthesizing human motion through learning techniques is becoming an increasingly popular approach to alleviating the requirement of new data capture to produce animations. Learning to move naturally from music, i.e., to dance, is one of the more complex motions humans often perform effortlessly. Each dance movement is unique, yet such movements maintain the core characteristics of the dance style. Most approaches addressing this problem with classical convolutional and recursive neural models suffer from training and variability issues due to the non-Euclidean geometry of the motion manifold structure. In this paper, we design a novel method based on graph convolutional networks (GCNs) to tackle the problem of automatic dance generation from audio information. Our method uses an adversarial learning scheme conditioned on the input music audio to create natural motions that preserve the key movements of different music styles. We evaluate our method with three quantitative metrics for generative methods and a user study. The results suggest that the proposed GCN model outperforms the state-of-the-art music-conditioned dance generation method in different experiments. Moreover, our graph-convolutional approach is simpler, easier to train, and capable of generating more realistic motion styles in both qualitative and quantitative terms. It also presents visual movement perceptual quality comparable to real motion data. The dataset and project are publicly available at: https://www.verlab.dcc.ufmg.br/motion-analysis/cag2020.
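The abstract describes a generator built from graph convolutions over the skeleton graph, conditioned on music audio and trained adversarially. The following PyTorch sketch is only a rough illustration of how such conditioning can be wired up, not the paper's actual architecture: it broadcasts an audio embedding to every skeleton joint and refines it with two graph-convolution layers; the joint count, feature sizes, adjacency, and layer names are illustrative assumptions, and the discriminator and temporal modeling are omitted.

```python
# Minimal sketch (not the authors' implementation) of an audio-conditioned
# graph-convolutional pose generator. All dimensions and the toy skeleton
# below are assumptions for illustration only.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """Spatial graph convolution over skeleton joints: H' = A_norm @ H @ W."""

    def __init__(self, in_feats: int, out_feats: int, adjacency: torch.Tensor):
        super().__init__()
        # Symmetrically normalize the adjacency (with self-loops) once.
        a = adjacency + torch.eye(adjacency.size(0))
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        self.register_buffer("a_norm", d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :])
        self.linear = nn.Linear(in_feats, out_feats)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, joints, in_feats) -> (batch, joints, out_feats)
        return torch.relu(self.linear(torch.einsum("ij,bjf->bif", self.a_norm, h)))


class AudioConditionedGenerator(nn.Module):
    """Maps a pooled audio feature vector to 2D joint coordinates for one frame."""

    def __init__(self, adjacency: torch.Tensor, audio_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.num_joints = adjacency.size(0)
        # The audio embedding is broadcast to every joint as its input feature.
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.gcn1 = GraphConv(hidden, hidden, adjacency)
        self.gcn2 = GraphConv(hidden, hidden, adjacency)
        self.to_coords = nn.Linear(hidden, 2)  # (x, y) per joint

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, audio_dim), e.g. pooled spectrogram features.
        h = self.audio_proj(audio_feats).unsqueeze(1).expand(-1, self.num_joints, -1)
        h = self.gcn2(self.gcn1(h))
        return self.to_coords(h)  # (batch, joints, 2)


if __name__ == "__main__":
    # Toy 5-joint chain "skeleton"; a real skeleton graph would replace this.
    chain = torch.zeros(5, 5)
    for i in range(4):
        chain[i, i + 1] = chain[i + 1, i] = 1.0
    gen = AudioConditionedGenerator(chain)
    poses = gen(torch.randn(8, 128))
    print(poses.shape)  # torch.Size([8, 5, 2])
```

In an adversarial setup like the one the abstract outlines, a discriminator would score generated joint sequences against motion-capture data, and the generator above would additionally take a temporal context (previous poses or a latent noise sequence) rather than a single pooled audio vector.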
DOI: 10.1016/j.cag.2020.09.009
ISSN: 0097-8493
EISSN: 1873-7684
Source: Elsevier ScienceDirect Journals
Subjects:
Audio data
Computer Science
Computer Vision and Pattern Recognition
Conditional adversarial nets
Dance
Data capture
Euclidean geometry
Graph convolutional neural networks
Human motion
Human motion generation
Learning
Multimodal learning
Music
Sound and dance processing