DeformerNet: Learning Bimanual Manipulation of 3D Deformable Objects

Applications in fields ranging from home care to warehouse fulfillment to surgical assistance require robots to reliably manipulate the shape of 3D deformable objects. Analytic models of elastic, 3D deformable objects require numerous parameters to describe the potentially infinite degrees of freedom present in determining the object's shape. Previous attempts at performing 3D shape control rely on hand-crafted features to represent the object shape and require training of object-specific control models. We overcome these issues through the use of our novel DeformerNet neural network architecture, which operates on a partial-view point cloud of the manipulated object and a point cloud of the goal shape to learn a low-dimensional representation of the object shape. This shape embedding enables the robot to learn a visual servo controller that computes the desired robot end-effector action to iteratively deform the object toward the target shape. We demonstrate both in simulation and on a physical robot that DeformerNet reliably generalizes to object shapes and material stiffness not seen during training, including ex vivo chicken muscle tissue. Crucially, using DeformerNet, the robot successfully accomplishes three surgical sub-tasks: retraction (moving tissue aside to access a site underneath it), tissue wrapping (a sub-task in procedures like aortic stent placements), and connecting two tubular pieces of tissue (a sub-task in anastomosis).
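The abstract describes the control loop only at a high level: encode the current and goal partial-view point clouds into low-dimensional shape embeddings, then compute an end-effector action and iterate. The PyTorch sketch below illustrates that loop under stated assumptions; the class names (PointCloudEncoder, ShapeServoPolicy), layer sizes, and the 3-DoF-per-arm action space are illustrative stand-ins and are not taken from the paper, whose actual DeformerNet architecture is not detailed in this record.

```python
# Hypothetical sketch of the described pipeline: point clouds -> shape
# embeddings -> one end-effector action per arm. Not the paper's model.
import torch
import torch.nn as nn


class PointCloudEncoder(nn.Module):
    """PointNet-style encoder: per-point MLP followed by max pooling."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) -> embedding: (batch, embed_dim)
        return self.mlp(points).max(dim=1).values


class ShapeServoPolicy(nn.Module):
    """Maps (current, goal) shape embeddings to an action per arm; a visual
    servo controller would apply the action, re-observe the object, and
    repeat until the current shape matches the goal shape."""

    def __init__(self, embed_dim: int = 256, num_arms: int = 2):
        super().__init__()
        self.encoder = PointCloudEncoder(embed_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, num_arms * 3),  # assumed: 3-DoF displacement per arm
        )

    def forward(self, current_pc: torch.Tensor, goal_pc: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.encoder(current_pc), self.encoder(goal_pc)], dim=-1)
        return self.head(z)


# Usage: one servo step on random stand-in point clouds.
policy = ShapeServoPolicy()
current = torch.rand(1, 1024, 3)   # partial-view cloud of the object
goal = torch.rand(1, 1024, 3)      # cloud of the desired goal shape
action = policy(current, goal)     # shape (1, 6): displacement for each arm
```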

Bibliographic Details
Published in: arXiv.org, 2024-02
Main authors: Thach, Bao; Cho, Brian Y; Shing-Hei Ho; Hermans, Tucker; Kuntz, Alan
Format: Article
Language: eng
Subjects: see list below
Online access: Full text
Identifier: EISSN 2331-8422
Source: Free E-Journals
Subjects:
Aorta
Cloud computing
Computer architecture
Deformation effects
Elastic deformation
End effectors
Formability
Mathematical models
Neural networks
Robots
Servocontrol
Shape control
Stiffness
Training
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T02%3A09%3A01IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=DeformerNet:%20Learning%20Bimanual%20Manipulation%20of%203D%20Deformable%20Objects&rft.jtitle=arXiv.org&rft.au=Thach,%20Bao&rft.date=2024-02-19&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2811357730%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2811357730&rft_id=info:pmid/&rfr_iscdi=true