Pose Representations for Deep Skeletal Animation

Data-driven character animation techniques rely on the existence of a properly established model of motion, capable of describing its rich context. However, commonly used motion representations often fail to accurately encode the full articulation of motion, or present artifacts. In this work, we address the fundamental problem of finding a robust pose representation for motion modeling, suitable for deep character animation, one that can better constrain poses and faithfully capture nuances correlated with skeletal characteristics. Our representation is based on dual quaternions, mathematical abstractions with well-defined operations that simultaneously encode rotational and positional information, enabling a hierarchy-aware encoding centered around the root. We demonstrate that our representation overcomes common motion artifacts, and assess its performance compared to other popular representations. We conduct an ablation study to evaluate the impact of various losses that can be incorporated during learning. Leveraging the fact that our representation implicitly encodes skeletal motion attributes, we train a network on a dataset comprising skeletons with different proportions, without the need to first retarget them to a universal skeleton, which causes subtle motion elements to be missed. We show that smooth and natural poses can be achieved, paving the way for fascinating applications.

Detailed description

Saved in:
Bibliographic details
Published in: arXiv.org 2022-07
Main authors: Andreou, Nefeli; Aristidou, Andreas; Chrysanthou, Yiorgos
Format: Article
Language: English
Subjects:
Online access: Full text
container_title arXiv.org
creator Andreou, Nefeli; Aristidou, Andreas; Chrysanthou, Yiorgos
description Data-driven character animation techniques rely on the existence of a properly established model of motion, capable of describing its rich context. However, commonly used motion representations often fail to accurately encode the full articulation of motion, or present artifacts. In this work, we address the fundamental problem of finding a robust pose representation for motion modeling, suitable for deep character animation, one that can better constrain poses and faithfully capture nuances correlated with skeletal characteristics. Our representation is based on dual quaternions, mathematical abstractions with well-defined operations that simultaneously encode rotational and positional information, enabling a hierarchy-aware encoding centered around the root. We demonstrate that our representation overcomes common motion artifacts, and assess its performance compared to other popular representations. We conduct an ablation study to evaluate the impact of various losses that can be incorporated during learning. Leveraging the fact that our representation implicitly encodes skeletal motion attributes, we train a network on a dataset comprising skeletons with different proportions, without the need to first retarget them to a universal skeleton, which causes subtle motion elements to be missed. We show that smooth and natural poses can be achieved, paving the way for fascinating applications.
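The abstract's central idea, that a dual quaternion simultaneously encodes a joint's rotation and position and composes along the skeletal hierarchy, can be sketched in a few lines of NumPy. This is a minimal illustration of standard dual-quaternion algebra, not the paper's actual implementation; all function names and the (w, x, y, z) quaternion convention here are our own assumptions.

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product of two quaternions in (w, x, y, z) order
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    # conjugate: negate the vector part
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def to_dual_quat(rotation, translation):
    # real part: unit rotation quaternion; dual part: 0.5 * t * q_r,
    # so one 8-number object carries both rotation and position
    qr = rotation / np.linalg.norm(rotation)
    t = np.concatenate(([0.0], translation))  # translation as a pure quaternion
    qd = 0.5 * quat_mul(t, qr)
    return qr, qd

def dq_mul(a, b):
    # composes two rigid transforms (apply b first, then a):
    # (ar + eps*ad)(br + eps*bd) = ar*br + eps*(ar*bd + ad*br)
    ar, ad = a
    br, bd = b
    return quat_mul(ar, br), quat_mul(ar, bd) + quat_mul(ad, br)

def dq_translation(dq):
    # recover the translation: t = 2 * qd * conj(qr)
    qr, qd = dq
    return (2.0 * quat_mul(qd, quat_conj(qr)))[1:]

# hierarchy-aware encoding in miniature: a parent joint rotated 90 degrees
# about z and offset by (1,0,0), with a child offset (1,0,0) in parent space
s = np.sqrt(0.5)
parent = to_dual_quat(np.array([s, 0.0, 0.0, s]), np.array([1.0, 0.0, 0.0]))
child = to_dual_quat(np.array([1.0, 0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
print(dq_translation(dq_mul(parent, child)))  # [1. 1. 0.]
```

Chaining `dq_mul` from the root down the joint hierarchy yields each joint's global pose directly, which is the hierarchy-aware, root-centered encoding the abstract describes.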
doi_str_mv 10.48550/arxiv.2111.13907
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2022-07
issn 2331-8422
language eng
recordid cdi_arxiv_primary_2111_13907
source arXiv.org; Free E-Journals
subjects Ablation
Animation
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Graphics
Quaternions
Representations
Robustness (mathematics)
title Pose Representations for Deep Skeletal Animation