Similarity-Aware Skill Reproduction based on Multi-Representational Learning from Demonstration

Learning from Demonstration (LfD) algorithms enable humans to teach new skills to robots through demonstrations. The learned skills can be robustly reproduced from identical or nearby boundary conditions (e.g., initial points). However, when generalizing a learned skill over boundary conditions with higher variance, the similarity of the reproductions changes from one boundary condition to another, and a single LfD representation cannot preserve a consistent similarity across a generalization region. We propose a novel similarity-aware framework including multiple LfD representations and a similarity metric that can improve skill generalization by finding reproductions with the highest similarity values for a given boundary condition. Given a demonstration of the skill, our framework constructs a similarity region around a point of interest (e.g., the initial point) by evaluating individual LfD representations using the similarity metric. Any point within this volume corresponds to a representation that reproduces the skill with the greatest similarity. We validate our multi-representational framework in three simulated experiments and four sets of real-world experiments using a physical 6-DOF robot. We also evaluate 11 different similarity metrics and categorize them according to their biases in 286 simulated experiments.

Detailed Description

Saved in:
Bibliographic Details
Published in: arXiv.org 2024-06
Main Authors: Hertel, Brendan, Ahmadzadeh, S Reza
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_title arXiv.org
creator Hertel, Brendan
Ahmadzadeh, S Reza
description Learning from Demonstration (LfD) algorithms enable humans to teach new skills to robots through demonstrations. The learned skills can be robustly reproduced from identical or nearby boundary conditions (e.g., initial points). However, when generalizing a learned skill over boundary conditions with higher variance, the similarity of the reproductions changes from one boundary condition to another, and a single LfD representation cannot preserve a consistent similarity across a generalization region. We propose a novel similarity-aware framework including multiple LfD representations and a similarity metric that can improve skill generalization by finding reproductions with the highest similarity values for a given boundary condition. Given a demonstration of the skill, our framework constructs a similarity region around a point of interest (e.g., the initial point) by evaluating individual LfD representations using the similarity metric. Any point within this volume corresponds to a representation that reproduces the skill with the greatest similarity. We validate our multi-representational framework in three simulated experiments and four sets of real-world experiments using a physical 6-DOF robot. We also evaluate 11 different similarity metrics and categorize them according to their biases in 286 simulated experiments.
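The selection scheme the abstract describes can be sketched as follows: reproduce the skill from the new boundary condition with each available LfD representation, score every reproduction against the demonstration with a similarity metric, and keep the best-scoring one. The code below is a minimal illustrative sketch, not the authors' implementation: the two "representations" are toy trajectory-adaptation functions, and the metric (inverse mean point-wise distance) is just one plausible choice among the 11 the paper evaluates. All function names are hypothetical.

```python
import numpy as np

def shift_endpoints(demo, new_start):
    """Toy representation 1: rigidly translate the demo to the new start."""
    return demo + (new_start - demo[0])

def blend_to_start(demo, new_start):
    """Toy representation 2: warp toward the new start with decaying weight."""
    w = np.linspace(1.0, 0.0, len(demo))[:, None]
    return demo + w * (new_start - demo[0])

def similarity(demo, repro):
    """One possible metric: inverse of (1 + mean point-wise distance)."""
    return 1.0 / (1.0 + np.mean(np.linalg.norm(demo - repro, axis=1)))

def best_reproduction(demo, new_start, representations):
    """Evaluate each representation and return the most similar reproduction."""
    scored = [(similarity(demo, rep(demo, new_start)), rep)
              for rep in representations]
    score, rep = max(scored, key=lambda t: t[0])
    return rep(demo, new_start), rep.__name__, score

# Example: a 2-D sine-arc demonstration, generalized to a new initial point.
demo = np.column_stack([np.linspace(0, 1, 50),
                        np.sin(np.linspace(0, np.pi, 50))])
repro, name, score = best_reproduction(demo, np.array([0.1, 0.2]),
                                       [shift_endpoints, blend_to_start])
```

Repeating this selection over a grid of candidate initial points yields the paper's "similarity region": each point in the region is labeled with whichever representation scores highest there.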
doi_str_mv 10.48550/arxiv.2110.14817
format Article
fullrecord [Raw MARC/PNX record omitted. Recoverable details: Publisher: Ithaca: Cornell University Library, arXiv.org; Date: 2024-06-28; EISSN: 2331-8422; DOI: 10.48550/arxiv.2110.14817; License: CC BY-NC-SA 4.0 (http://creativecommons.org/licenses/by-nc-sa/4.0/); Published version: https://doi.org/10.1109/ICAR53236.2021.9659470]
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-06
issn 2331-8422
language eng
recordid cdi_arxiv_primary_2110_14817
source arXiv.org; Free E-Journals
subjects Algorithms
Boundary conditions
Computer Science - Robotics
Machine learning
Representations
Robots
Similarity
Skills
title Similarity-Aware Skill Reproduction based on Multi-Representational Learning from Demonstration
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-09T04%3A01%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Similarity-Aware%20Skill%20Reproduction%20based%20on%20Multi-Representational%20Learning%20from%20Demonstration&rft.jtitle=arXiv.org&rft.au=Hertel,%20Brendan&rft.date=2024-06-28&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2110.14817&rft_dat=%3Cproquest_arxiv%3E2588158911%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2588158911&rft_id=info:pmid/&rfr_iscdi=true