Learning Whole-Body Manipulation for Quadrupedal Robot

We propose a learning-based system for enabling quadrupedal robots to manipulate large, heavy objects using their whole body. Our system is based on a hierarchical control strategy that uses a deep latent variable embedding, which captures manipulation-relevant information from interactions, proprioception, and action history, allowing the robot to implicitly understand object properties. We evaluate our framework in both simulation and real-world scenarios. In simulation, it achieves a success rate of 93.6% in accurately re-positioning and re-orienting various objects within a tolerance of 0.03 m and 5°. Real-world experiments demonstrate the successful manipulation of objects such as a 19.2 kg water-filled drum and a 15.3 kg plastic box filled with heavy objects, while the robot itself weighs 27 kg. Unlike previous works that focus on manipulating small, light objects with prehensile manipulation, our framework illustrates the possibility of using a quadruped's entire body to manipulate large, heavy objects that are ungraspable. Our method does not require explicit object modeling and offers significant computational efficiency compared to optimization-based methods.
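The abstract describes an encoder that compresses proprioception and action history into a latent embedding, which a policy then consumes to act without an explicit object model. The following is a minimal NumPy sketch of that idea only; it is not the authors' architecture, and all dimensions, layer sizes, and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper):
PROPRIO_DIM = 33   # e.g. joint positions/velocities, body orientation
ACTION_DIM = 12    # e.g. one target per actuated joint
HISTORY_LEN = 20   # past timesteps fed to the encoder
LATENT_DIM = 16    # size of the manipulation-relevant embedding z

def mlp_forward(x, weights):
    """Forward pass through a plain MLP with tanh hidden activations."""
    for i, (W, b) in enumerate(weights):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.tanh(x)
    return x

def init_mlp(sizes):
    """Small random weights and zero biases for each (in, out) layer pair."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# Encoder: flattened (proprioception, action) history -> latent z
enc_in = HISTORY_LEN * (PROPRIO_DIM + ACTION_DIM)
encoder = init_mlp([enc_in, 128, LATENT_DIM])

# Policy head: current proprioception + z -> next action
policy = init_mlp([PROPRIO_DIM + LATENT_DIM, 128, ACTION_DIM])

# One control step on dummy data
history = rng.normal(size=(HISTORY_LEN, PROPRIO_DIM + ACTION_DIM))
z = mlp_forward(history.reshape(-1), encoder)
obs = rng.normal(size=PROPRIO_DIM)
action = mlp_forward(np.concatenate([obs, z]), policy)

print(z.shape, action.shape)  # latent (16,), action (12,)
```

In the paper's framing, such an embedding is learned from interaction so the robot "implicitly understands" object properties (mass, friction, etc.) without modeling them explicitly; the randomly initialised weights above stand in for whatever the training procedure would produce.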

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE robotics and automation letters, 2024-01, Vol. 9 (1), p. 699-706
Main authors: Jeon, Seunghun; Jung, Moonkyu; Choi, Suyoung; Kim, Beomjoon; Hwangbo, Jemin
Format: Article
Language: English
Online access: Order full text
DOI: 10.1109/LRA.2023.3335777
ISSN: 2377-3766
Source: IEEE Electronic Library (IEL)
Subjects: Deep learning; Deep learning methods; Force sensors; Learning; Legged locomotion; legged robots; Quadrupedal robots; Real-time systems; Reinforcement learning; Robot sensing systems; Robots; Task analysis; Training