Muscle‐driven virtual human motion generation approach based on deep reinforcement learning

We propose a muscle-driven motion generation approach to realize virtual human motion with user interaction and higher fidelity, addressing the problem that joint-driven approaches fail to reflect the motion process of the human body. First, a simplified virtual human musculoskeletal model is built based on human biomechanics. Then, a hierarchical policy learning framework is constructed, consisting of a motion tracking layer, an SPD controller, and a muscle control layer. The motion tracking layer is responsible for imitating the reference motion and executing control commands, and its policy is trained with proximal policy optimization; the muscle control layer aims to minimize muscle energy consumption and is trained with supervised learning; the SPD controller acts as the link between the two layers. In addition, curriculum learning is integrated to improve the efficiency and success rate of policy training. Simulation experiments show that the proposed approach can use both motion capture data and pose estimation data as reference motions to generate better and more adaptable motions. Furthermore, the virtual human can respond to user control commands during the motion and complete the target task successfully.

Graphical abstract: pose estimation motion data obtained from video is used as the reference motion for imitation. (a) The jumping video downloaded from the web. (b) Our imitated jumping motion.
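The abstract describes a per-timestep control hierarchy: a PPO-trained tracking policy proposes target joint poses, an SPD (stable proportional-derivative) controller turns the pose error into joint torques, and a supervised muscle control layer converts those torques into muscle activations while keeping energy consumption low. The sketch below only illustrates how such a loop could be wired together; it is not the authors' implementation. All dimensions, gains, and the toy linear `TrackingPolicy` and `MuscleLayer` mappings are hypothetical placeholders (in the paper both layers are trained networks).

```python
import numpy as np

# Hypothetical dimensions for a simplified musculoskeletal model.
N_JOINTS = 20    # actuated degrees of freedom
N_MUSCLES = 60   # muscle-tendon units


def spd_torque(q, qdot, q_target, kp=300.0, kd=30.0, dt=1.0 / 600.0):
    """Stable PD control: drive the current pose q toward q_target while
    damping the joint velocities qdot. The proportional term is evaluated
    at the next timestep (q + qdot*dt), which keeps high gains stable."""
    return -kp * (q + qdot * dt - q_target) - kd * qdot


class TrackingPolicy:
    """Stand-in for the PPO-trained motion tracking layer: maps the character
    state and the current reference pose to target joint angles."""

    def __init__(self, obs_dim, act_dim, rng):
        self.W = rng.standard_normal((act_dim, obs_dim)) * 0.01  # toy linear policy

    def act(self, state, reference_pose):
        obs = np.concatenate([state, reference_pose])
        # Output an offset around the reference pose (a common parameterization).
        return reference_pose[: self.W.shape[0]] + self.W @ obs


class MuscleLayer:
    """Stand-in for the supervised muscle control layer: maps desired joint
    torques to muscle activations in [0, 1], trained to minimize energy."""

    def __init__(self, n_joints, n_muscles, rng):
        self.W = rng.standard_normal((n_muscles, n_joints)) * 0.05

    def activations(self, desired_torque):
        return np.clip(self.W @ desired_torque, 0.0, 1.0)


def control_step(q, qdot, reference_pose, policy, muscles):
    """One pass through the hierarchy: tracking policy -> SPD -> muscle layer."""
    target_pose = policy.act(np.concatenate([q, qdot]), reference_pose)
    torque = spd_torque(q, qdot, target_pose)
    return muscles.activations(torque)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    policy = TrackingPolicy(obs_dim=3 * N_JOINTS, act_dim=N_JOINTS, rng=rng)
    muscles = MuscleLayer(N_JOINTS, N_MUSCLES, rng)
    q = np.zeros(N_JOINTS)
    qdot = np.zeros(N_JOINTS)
    reference_pose = 0.1 * np.ones(N_JOINTS)  # one frame of a reference motion
    a = control_step(q, qdot, reference_pose, policy, muscles)
    print("muscle activations:", a.shape, float(a.min()), float(a.max()))
```

In a full system the tracking policy would be optimized with PPO against an imitation and task reward, and the muscle layer fit by supervised learning to reproduce the SPD torques with minimal activation effort, as the abstract outlines.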

Bibliographic Details

Published in: Computer animation and virtual worlds, 2022-06, Vol. 33 (3-4), p. n/a
Authors: Qin, Wenhu; Tao, Ran; Sun, Libo; Dong, Kaiyue
Format: Article
Language: English
DOI: 10.1002/cav.2092
ISSN: 1546-4261
EISSN: 1546-427X
Publisher: John Wiley & Sons, Inc., Hoboken, USA
Source: Wiley Online Library
Subjects: Biomechanics; Controllers; curriculum learning; Deep learning; deep reinforcement learning; Energy consumption; Human motion; Human performance; Machine learning; Motion capture; motion generation; Muscles; musculoskeletal model; Optimization; Pose estimation; Tracking control; Virtual humans
Online access: Full text