Generative adversarial networks to create synthetic motion capture datasets including subject and gait characteristics
Saved in:
Published in: Journal of Biomechanics, 2024-12, Vol. 177, p. 112358, Article 112358
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Predictive deep learning applications require large, diverse datasets, which resource-intensive motion capture (mocap) systems make difficult to collect. We addressed this by modifying generative adversarial networks (GANs) into conditional GANs (cGANs) that generate diverse mocap data, including 15 marker trajectories, lower limb joint angles, and 3D ground reaction forces (GRFs), based on specified subject and gait characteristics. The cGAN comprised 1) an encoder compressing mocap data to a latent vector, 2) a decoder reconstructing the mocap data from the latent vector under specified conditions, and 3) a discriminator distinguishing random vectors paired with conditions from encoded latent vectors paired with conditions. Single-conditional models were trained separately for age, sex, leg length, mass, and walking speed, while an additional model (Multi-cGAN) combined all conditions simultaneously to generate synthetic data. All models closely replicated the training dataset (
ISSN: 0021-9290, 1873-2380
DOI: | 10.1016/j.jbiomech.2024.112358 |
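The abstract describes an encoder, a condition-aware decoder, and a discriminator that judges latent vectors paired with conditions, i.e. an adversarial-autoencoder-style cGAN. The forward paths can be sketched as below; this is a minimal numpy illustration of the data flow only, and every dimension (time steps, latent size, layer shapes) is an assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper's exact sizes are not given in the record.
T = 100                       # assumed time steps per gait cycle
channels = 15 * 3 + 6 + 3     # 15 markers (x,y,z) + joint angles + 3D GRFs (assumed split)
d_in = T * channels           # one flattened mocap trial
d_lat = 32                    # assumed latent vector size
d_cond = 5                    # age, sex, leg length, mass, walking speed

def linear(d_out, d_inp):
    """Random single linear layer standing in for a trained network."""
    return rng.normal(0, 0.01, (d_out, d_inp)), np.zeros(d_out)

W_enc, b_enc = linear(d_lat, d_in)
W_dec, b_dec = linear(d_in, d_lat + d_cond)
W_dis, b_dis = linear(1, d_lat + d_cond)

def encoder(x):
    # 1) compress a mocap trial to a latent vector
    return W_enc @ x + b_enc

def decoder(z, c):
    # 2) reconstruct the trial from latent vector + conditions
    return W_dec @ np.concatenate([z, c]) + b_dec

def discriminator(z, c):
    # 3) score whether (latent, conditions) came from the prior or the encoder
    s = W_dis @ np.concatenate([z, c]) + b_dis
    return 1.0 / (1.0 + np.exp(-s))   # sigmoid probability

x = rng.normal(size=d_in)             # one flattened mocap trial
c = rng.normal(size=d_cond)           # subject/gait conditions
z_enc = encoder(x)
x_rec = decoder(z_enc, c)             # reconstruction path
z_prior = rng.normal(size=d_lat)      # prior sample for the adversarial path
p_fake = float(discriminator(z_enc, c))
p_real = float(discriminator(z_prior, c))
print(x_rec.shape, p_fake, p_real)
```

At generation time only the decoder is needed: sample a latent vector from the prior, attach the desired subject and gait conditions, and decode to a synthetic trial.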