Generative deep learning applied to biomechanics: A new augmentation technique for motion capture datasets

Deep learning biomechanical models perform optimally when trained with large datasets; however, these can be challenging to collect in gait laboratories, and few augmentation techniques are available. This study presents a data augmentation approach based on generative adversarial networks that generates synthetic motion capture (mocap) datasets of marker trajectories and ground reaction forces (GRFs). The proposed architecture, called an adversarial autoencoder, consists of an encoder compressing mocap data to a latent vector, a decoder reconstructing the mocap data from the latent vector, and a discriminator distinguishing random vectors from encoded latent vectors. Direct kinematics (DK) and inverse kinematics (IK) joint angles, GRFs, and inverse dynamics (ID) joint moments calculated for real and synthetic trials were compared using statistical parametric mapping to ensure realistic data generation and to select optimal architectural hyperparameters based on percentage average differences across the gait cycle. Differences were negligible for DK-computed joint angles and GRFs, but not for the inverse methods (IK: 29.2%; ID: 35.5%). When the same architecture was also trained on the joint angles calculated by IK, no significant differences were found in the kinematics and GRFs, and joint moment estimation improved (ID: 25.7%). Finally, the data augmentation approach improved the accuracy of joint kinematics (by up to 23%, 0.8°) and vertical GRFs (11%) predicted by standard neural networks using a single simulated pelvic inertial measurement unit. These findings suggest that predictive deep learning models can benefit from synthetic datasets produced with the proposed technique.
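The adversarial autoencoder described in the abstract can be sketched in miniature. The toy below is purely illustrative: the dimensions, the random affine layers, and the absence of a training loop are all simplifying assumptions, and a real implementation would use a deep-learning framework and jointly optimise the reconstruction and adversarial losses. It only shows how the three components (encoder, decoder, discriminator) relate, and how synthetic trials would be produced by decoding samples from the prior.

```python
import math
import random

random.seed(0)

# Hypothetical sizes: a (tiny) flattened mocap trial and its latent code.
D_IN, D_LATENT = 12, 3

def layer(d_in, d_out):
    """Random affine layer: weight matrix (d_in x d_out) plus bias."""
    W = [[random.gauss(0, 0.1) for _ in range(d_out)] for _ in range(d_in)]
    b = [0.0] * d_out
    return W, b

def affine(x, W, b):
    return [sum(xi * W[i][j] for i, xi in enumerate(x)) + b[j]
            for j in range(len(b))]

W_enc, b_enc = layer(D_IN, D_LATENT)    # encoder: trial -> latent vector
W_dec, b_dec = layer(D_LATENT, D_IN)    # decoder: latent vector -> trial
W_dis, b_dis = layer(D_LATENT, 1)       # discriminator: latent -> score

def encode(x):
    return [math.tanh(v) for v in affine(x, W_enc, b_enc)]

def decode(z):
    return affine(z, W_dec, b_dec)

def discriminate(z):
    # Sigmoid probability that z was drawn from the prior.
    s = affine(z, W_dis, b_dis)[0]
    return 1.0 / (1.0 + math.exp(-s))

# Reconstruction objective on one "trial" (random stand-in data).
x = [random.gauss(0, 1) for _ in range(D_IN)]
z = encode(x)
x_hat = decode(z)
recon_loss = sum((a - r) ** 2 for a, r in zip(x, x_hat)) / D_IN

# Adversarial objective: the discriminator compares an encoded latent
# against a prior sample; training pushes encode(x) toward the prior.
p_encoded = discriminate(z)
p_prior = discriminate([random.gauss(0, 1) for _ in range(D_LATENT)])

# After training, synthetic trials come from decoding prior samples.
synthetic_trial = decode([random.gauss(0, 1) for _ in range(D_LATENT)])
```

In the paper's setting, the decoded vector would be reshaped back into marker trajectories and GRF curves over the gait cycle; here it is just a flat vector.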


Bibliographic details
Published in: Journal of Biomechanics, 2022-11, Vol. 144, Article 111301
Authors: Bicer, Metin; Phillips, Andrew T.M.; Melis, Alessandro; McGregor, Alison H.; Modenese, Luca
Format: Article
Language: English
Online access: Full text
DOI: 10.1016/j.jbiomech.2022.111301
ISSN: 0021-9290
EISSN: 1873-2380
Source: Elsevier ScienceDirect Journals Complete
Subjects:
Biomechanics
Coders
Data augmentation
Data collection
Data compression
Datasets
Deep learning
Gait
Generative adversarial networks
Inertial platforms
Inverse dynamics
Inverse kinematics
Kinematics
Kinetics
Motion capture
Neural networks
Optimization
Sensors