Humanoid Parkour Learning

Parkour is a grand challenge for legged locomotion, even for quadruped robots, requiring active perception and various maneuvers to overcome multiple challenging obstacles. Existing methods for humanoid locomotion either optimize a trajectory for a single parkour track or train a reinforcement learning policy only to walk with a significant amount of motion references. In this work, we propose a framework for learning an end-to-end vision-based whole-body-control parkour policy for humanoid robots that overcomes multiple parkour skills without any motion prior. Using the parkour policy, the humanoid robot can jump on a 0.42m platform, leap over hurdles, 0.8m gaps, and much more. It can also run at 1.8m/s in the wild and walk robustly on different terrains. We test our policy in indoor and outdoor environments to demonstrate that it can autonomously select parkour skills while following the rotation command of the joystick. We override the arm actions and show that this framework can easily transfer to humanoid mobile manipulation tasks. Videos can be found at https://humanoid4parkour.github.io
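The abstract describes an end-to-end vision-based whole-body-control policy: depth perception plus proprioception and a joystick command map directly to joint targets, with arm actions that can be overridden for manipulation. A minimal sketch of that observation/action interface, assuming illustrative shapes and a stand-in random linear layer in place of the trained network (all names and dimensions here are hypothetical, not the paper's implementation):

```python
import numpy as np

DEPTH_SHAPE = (48, 64)   # assumed low-resolution depth input
NUM_JOINTS = 19          # assumed humanoid degrees of freedom

class ParkourPolicySketch:
    """Illustrative interface: observations in, whole-body joint targets out."""

    def __init__(self, seed: int = 0):
        rng = np.random.default_rng(seed)
        obs_dim = DEPTH_SHAPE[0] * DEPTH_SHAPE[1] + 2 * NUM_JOINTS + 1
        # Stand-in for a trained network: one random linear layer.
        self.weights = rng.normal(0.0, 0.01, size=(NUM_JOINTS, obs_dim))

    def act(self, depth, joint_pos, joint_vel, yaw_command):
        # Concatenate depth image, proprioception, and joystick rotation
        # command into a single observation vector.
        obs = np.concatenate([depth.ravel(), joint_pos, joint_vel, [yaw_command]])
        # One position target per joint; a downstream manipulation
        # controller could override the arm entries, as the paper does.
        return self.weights @ obs

policy = ParkourPolicySketch()
action = policy.act(
    depth=np.zeros(DEPTH_SHAPE),
    joint_pos=np.zeros(NUM_JOINTS),
    joint_vel=np.zeros(NUM_JOINTS),
    yaw_command=0.5,
)
print(action.shape)
```

The key design point the abstract emphasizes is that this single policy, trained without motion priors, covers walking, running, and all parkour skills, selecting among them from perception alone while only the yaw command comes from the operator.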

Bibliographic Details
Main Authors: Zhuang, Ziwen; Yao, Shenzhe; Zhao, Hang
Format: Article
Language: English
Subjects: Computer Science - Robotics
Online Access: Order full text
creator Zhuang, Ziwen; Yao, Shenzhe; Zhao, Hang
description Parkour is a grand challenge for legged locomotion, even for quadruped robots, requiring active perception and various maneuvers to overcome multiple challenging obstacles. Existing methods for humanoid locomotion either optimize a trajectory for a single parkour track or train a reinforcement learning policy only to walk with a significant amount of motion references. In this work, we propose a framework for learning an end-to-end vision-based whole-body-control parkour policy for humanoid robots that overcomes multiple parkour skills without any motion prior. Using the parkour policy, the humanoid robot can jump on a 0.42m platform, leap over hurdles, 0.8m gaps, and much more. It can also run at 1.8m/s in the wild and walk robustly on different terrains. We test our policy in indoor and outdoor environments to demonstrate that it can autonomously select parkour skills while following the rotation command of the joystick. We override the arm actions and show that this framework can easily transfer to humanoid mobile manipulation tasks. Videos can be found at https://humanoid4parkour.github.io
doi 10.48550/arxiv.2406.10759
format Article
language eng
source arXiv.org
subjects Computer Science - Robotics
title Humanoid Parkour Learning