Learning from Demonstration Framework for Multi-Robot Systems Using Interaction Keypoints and Soft Actor-Critic Methods

Learning from Demonstration (LfD) is a promising approach to enable Multi-Robot Systems (MRS) to acquire complex skills and behaviors. However, the intricate interactions and coordination challenges in MRS pose significant hurdles for effective LfD. In this paper, we present a novel LfD framework specifically designed for MRS, which leverages visual demonstrations to capture and learn from robot-robot and robot-object interactions. Our framework introduces the concept of Interaction Keypoints (IKs) to transform the visual demonstrations into a representation that facilitates the inference of various skills necessary for the task. The robots then execute the task using sensorimotor actions and reinforcement learning (RL) policies when required. A key feature of our approach is the ability to handle unseen contact-based skills that emerge during the demonstration. In such cases, RL is employed to learn the skill using a classifier-based reward function, eliminating the need for manual reward engineering and ensuring adaptability to environmental changes. We evaluate our framework across a range of mobile robot tasks, covering both behavior-based and contact-based domains. The results demonstrate the effectiveness of our approach in enabling robots to learn complex multi-robot tasks and behaviors from visual demonstrations.

Bibliographic Details
Main Authors: Venkatesh, Vishnunandan L. N; Min, Byung-Cheol
Format: Article
Language: English
Online Access: Order full text
DOI: 10.48550/arxiv.2404.02324
Date: 2024-04-02
Rights: CC BY 4.0 (http://creativecommons.org/licenses/by/4.0), free to read
Source: arXiv.org
Subjects: Computer Science - Robotics