Multimodal Contextualized Plan Prediction for Embodied Task Completion
Task planning is an important component of traditional robotics systems, enabling robots to compose fine-grained skills to perform more complex tasks. Recent work building systems for translating natural language to executable actions for task completion in simulated embodied agents has focused on directly predicting low-level action sequences that would be expected to be directly executable by a physical robot. In this work, we instead focus on predicting a higher-level plan representation for one such embodied task completion dataset, TEACh, under the assumption that techniques for high-level plan prediction from natural language are expected to be more transferable to physical robot systems. We demonstrate that better plans can be predicted using multimodal context, and that plan prediction and plan execution modules are likely dependent on each other, and hence it may not be ideal to fully decouple them. Further, we benchmark execution of oracle plans to quantify the scope for improvement in plan prediction models.
Saved in:
Main Authors: | İnan, Mert; Padmakumar, Aishwarya; Gella, Spandana; Lange, Patrick; Hakkani-Tur, Dilek |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Human-Computer Interaction; Computer Science - Robotics |
Online Access: | Order full text |
---|---|
creator | İnan, Mert; Padmakumar, Aishwarya; Gella, Spandana; Lange, Patrick; Hakkani-Tur, Dilek |
description | Task planning is an important component of traditional robotics systems,
enabling robots to compose fine-grained skills to perform more complex tasks.
Recent work building systems for translating natural language to executable
actions for task completion in simulated embodied agents has focused on directly
predicting low-level action sequences that would be expected to be directly
executable by a physical robot. In this work, we instead focus on predicting a
higher-level plan representation for one such embodied task completion dataset,
TEACh, under the assumption that techniques for high-level plan prediction
from natural language are expected to be more transferable to physical robot
systems. We demonstrate that better plans can be predicted using multimodal
context, and that plan prediction and plan execution modules are likely
dependent on each other, and hence it may not be ideal to fully decouple them.
Further, we benchmark execution of oracle plans to quantify the scope for
improvement in plan prediction models. |
doi_str_mv | 10.48550/arxiv.2305.06485 |
format | Article |
creationdate | 2023-05-10 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2305.06485 |
language | eng |
recordid | cdi_arxiv_primary_2305_06485 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Human-Computer Interaction; Computer Science - Robotics |
title | Multimodal Contextualized Plan Prediction for Embodied Task Completion |