GeoSACS: Geometric Shared Autonomy via Canal Surfaces

We introduce GeoSACS, a geometric framework for shared autonomy (SA). In variable environments, SA methods can be used to combine robotic capabilities with real-time human input in a way that offloads the physical task from the human. To remain intuitive, it can be helpful to simplify the requirements for human input (i.e., reduce its dimensionality), which creates the challenge of mapping low-dimensional human inputs to the higher-dimensional control space of robots without requiring large amounts of data. We built GeoSACS on canal surfaces, a geometric representation that models potential robot trajectories as a canal from as few as two demonstrations. GeoSACS maps user corrections onto the cross-sections of this canal to provide an efficient SA framework. We extend canal surfaces to consider orientation and update the control frames to support intuitive mapping from user input to robot motions. Finally, we demonstrate GeoSACS in two preliminary studies, including a complex manipulation task where a robot loads laundry into a washer.
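The canal-surface idea in the abstract can be sketched in a few lines: a spine curve with a varying radius is built from two demonstrations, and a 2D user input is mapped onto the cross-section disk at the current point along the spine. This is an illustrative simplification under our own assumptions (straight-line toy demonstrations, a naive moving frame), not the authors' implementation; all function names here are ours.

```python
import numpy as np

def canal_from_demos(demo_a, demo_b):
    """Spine = mean of two aligned demonstrations; radius = half their gap."""
    spine = (demo_a + demo_b) / 2.0
    radius = np.linalg.norm(demo_a - demo_b, axis=1) / 2.0
    return spine, radius

def cross_section_frame(spine, i):
    """A simple moving frame (tangent, normal, binormal) at waypoint i."""
    t = spine[min(i + 1, len(spine) - 1)] - spine[max(i - 1, 0)]
    t = t / np.linalg.norm(t)
    ref = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(t, ref)) > 0.99:   # avoid a degenerate frame
        ref = np.array([0.0, 1.0, 0.0])
    n = np.cross(t, ref)
    n = n / np.linalg.norm(n)
    b = np.cross(t, n)
    return t, n, b

def apply_correction(spine, radius, i, u):
    """Map a 2D user input u (|u| <= 1) onto the cross-section disk at i."""
    _, n, b = cross_section_frame(spine, i)
    u = np.asarray(u, dtype=float)
    m = np.linalg.norm(u)
    if m > 1.0:                      # clamp input to the canal boundary
        u = u / m
    return spine[i] + radius[i] * (u[0] * n + u[1] * b)

# Two toy straight-line demonstrations, 0.2 m apart in y
s = np.linspace(0.0, 1.0, 50)
demo_a = np.stack([s, np.full(50, -0.1), np.zeros(50)], axis=1)
demo_b = np.stack([s, np.full(50, 0.1), np.zeros(50)], axis=1)
spine, radius = canal_from_demos(demo_a, demo_b)

# A full-deflection input along the first cross-section axis lands on the
# canal boundary (here, the y = -0.1 demonstration side).
p = apply_correction(spine, radius, 25, [1.0, 0.0])
```

The low-dimensional input (a point in the unit disk) thus selects a point inside the canal without the user ever specifying a full 6-DoF pose, which is the dimensionality reduction the abstract describes.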

Bibliographic Details

Published in: arXiv.org, 2024-04
Authors: Shalutha Rajapakshe, Atharva Dastenavar, Michael Hagenow, Jean-Marc Odobez, Emmanuel Senft
Format: Article
Language: English
Online access: Full text
EISSN: 2331-8422
Subjects: Autonomy; Robot control; Robots