Self-Supervised Motion Retargeting with Safety Guarantee
In this paper, we present self-supervised shared latent embedding (S3LE), a data-driven motion retargeting method that enables the generation of natural motions in humanoid robots from motion capture data or RGB videos. While it requires paired data consisting of human poses and their corresponding...
Saved in:
Main authors: | Choi, Sungjoon, Song, Min Jae, Ahn, Hyemin, Kim, Joohyung |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning; Computer Science - Robotics |
Online access: | Order full text |
creator | Choi, Sungjoon; Song, Min Jae; Ahn, Hyemin; Kim, Joohyung |
---|---|
description | In this paper, we present self-supervised shared latent embedding (S3LE), a
data-driven motion retargeting method that enables the generation of natural
motions in humanoid robots from motion capture data or RGB videos. While it
requires paired data consisting of human poses and their corresponding robot
configurations, it significantly alleviates the necessity of time-consuming
data-collection via novel paired data generating processes. Our self-supervised
learning procedure consists of two steps: automatically generating paired data
to bootstrap the motion retargeting, and learning a projection-invariant
mapping to handle the different expressivity of humans and humanoid robots.
Furthermore, our method guarantees that the generated robot pose is
collision-free and satisfies position limits by utilizing nonparametric
regression in the shared latent space. We demonstrate that our method can
generate expressive robotic motions from both the CMU motion capture database
and YouTube videos. |
doi_str_mv | 10.48550/arxiv.2103.06447 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2103.06447 |
language | eng |
recordid | cdi_arxiv_primary_2103_06447 |
source | arXiv.org |
subjects | Computer Science - Learning; Computer Science - Robotics |
title | Self-Supervised Motion Retargeting with Safety Guarantee |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-22T09%3A19%3A35IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Self-Supervised%20Motion%20Retargeting%20with%20Safety%20Guarantee&rft.au=Choi,%20Sungjoon&rft.date=2021-03-10&rft_id=info:doi/10.48550/arxiv.2103.06447&rft_dat=%3Carxiv_GOX%3E2103_06447%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
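
The abstract above attributes the safety guarantee to nonparametric regression in the shared latent space. The sketch below is a minimal, hypothetical illustration of that idea using Gaussian-kernel (Nadaraya-Watson) regression over a bank of robot configurations assumed to have been verified safe offline. The variable names, dimensions, and random placeholder data are assumptions for illustration, not the authors' implementation, which is described in the paper and additionally handles collision checking.

```python
import numpy as np

# Hypothetical bank of paired data: latent codes Z_safe[i] in the shared
# latent space, each paired with a robot configuration Q_safe[i] assumed to
# be collision-free and within joint position limits. Random placeholder
# data stands in for the paired-data generation step mentioned in the abstract.
rng = np.random.default_rng(0)
Z_safe = rng.standard_normal((500, 16))            # latent codes
Q_safe = rng.uniform(-1.0, 1.0, size=(500, 20))    # robot joint configurations


def retarget_safe(z_query, bandwidth=0.5):
    """Nadaraya-Watson kernel regression in the shared latent space.

    The result is a convex combination of configurations that already respect
    joint position limits (a convex set), so it respects them too. This alone
    does not reproduce the paper's collision-free guarantee, since the
    collision-free set is generally not convex.
    """
    d2 = np.sum((Z_safe - z_query) ** 2, axis=1)    # squared latent distances
    d2 -= d2.min()                                  # shift for numerical stability
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # Gaussian kernel weights
    w /= w.sum()                                    # normalize -> convex weights
    return w @ Q_safe                               # weighted average of safe poses


# Usage: encode a human pose into the shared latent space (encoder not shown),
# then map it to a robot configuration via the nonparametric regressor.
z = rng.standard_normal(16)     # stand-in for an encoded human pose
q_robot = retarget_safe(z)
print(q_robot.shape)            # (20,)
```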