GPT-4V(ision) for Robotics: Multimodal Task Planning From Human Demonstration
We introduce a pipeline that enhances a general-purpose Vision Language Model, GPT-4V(ision), to facilitate one-shot visual teaching for robotic manipulation. This system analyzes videos of humans performing tasks and outputs executable robot programs that incorporate insights into affordances.
Saved in:
Published in: | IEEE robotics and automation letters, 2024-11, Vol. 9 (11), p. 10567-10574 |
Main Authors: | Wake, Naoki; Kanehira, Atsushi; Sasabuchi, Kazuhiro; Takamatsu, Jun; Ikeuchi, Katsushi |
Format: | Article |
Language: | eng |
Subjects: | Affordances; Collision avoidance; Data models; Grasping (robotics); Grounding; Imitation learning; Machine vision; Pipelines; Planning; Robotics; Robots; Task and motion planning; Task planning; Task planning (robotics); Training; Video; Vision systems; Visual tasks; Visualization |
Online Access: | Full text |
container_end_page | 10574 |
container_issue | 11 |
container_start_page | 10567 |
container_title | IEEE robotics and automation letters |
container_volume | 9 |
creator | Wake, Naoki; Kanehira, Atsushi; Sasabuchi, Kazuhiro; Takamatsu, Jun; Ikeuchi, Katsushi |
description | We introduce a pipeline that enhances a general-purpose Vision Language Model, GPT-4V(ision), to facilitate one-shot visual teaching for robotic manipulation. This system analyzes videos of humans performing tasks and outputs executable robot programs that incorporate insights into affordances. The process begins with GPT-4V analyzing the videos to obtain textual explanations of environmental and action details. A GPT-4-based task planner then encodes these details into a symbolic task plan. Subsequently, vision systems spatially and temporally ground the task plan in the videos: objects are identified using an open-vocabulary object detector, and hand-object interactions are analyzed to pinpoint moments of grasping and releasing. This spatiotemporal grounding allows for the gathering of affordance information (e.g., grasp types, waypoints, and body postures) critical for robot execution. Experiments across various scenarios demonstrate the method's efficacy in enabling real robots to operate from one-shot human demonstrations. Meanwhile, quantitative tests have revealed instances of hallucination in GPT-4V, highlighting the importance of incorporating human supervision within the pipeline. |
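The description above outlines a staged pipeline: a vision-language model turns the demonstration video into a textual explanation, an LLM-based planner converts that text into a symbolic task plan, vision modules ground the plan spatially and temporally in the video, and the grounded affordance information is compiled into an executable robot program. The sketch below illustrates how such stages could be composed in Python. It is a minimal, hedged illustration: every function name, data structure, and return value is a hypothetical placeholder, not the authors' implementation or any real library API.

```python
# Illustrative sketch of the pipeline stages described in the abstract.
# All names and return values are placeholders standing in for the paper's
# components (GPT-4V description, GPT-4 planning, open-vocabulary detection,
# hand-object interaction analysis); none of this is the authors' code.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Affordance:
    object_name: str
    grasp_type: str                     # e.g. "precision grasp" (placeholder vocabulary)
    waypoints: List[Tuple[float, float, float]]  # approach/release waypoints (placeholder)


def describe_video_with_vlm(video_frames) -> str:
    """Stage 1 (assumed): a vision-language model such as GPT-4V produces a
    textual explanation of the environment and the demonstrated actions."""
    return "A person picks up a cup from the table and places it on the shelf."


def plan_symbolic_task(scene_description: str) -> List[str]:
    """Stage 2 (assumed): an LLM-based task planner encodes the description
    into a symbolic action sequence."""
    return ["grasp(cup)", "move(cup, shelf)", "release(cup)"]


def ground_plan_in_video(task_plan: List[str], video_frames) -> List[Affordance]:
    """Stage 3 (assumed): open-vocabulary object detection plus hand-object
    interaction analysis localize grasp/release moments and collect affordance
    information such as grasp types and waypoints."""
    return [Affordance("cup", "precision grasp", [(0.4, 0.1, 0.2), (0.6, 0.3, 0.5)])]


def compile_robot_program(task_plan: List[str], affordances: List[Affordance]) -> str:
    """Stage 4 (assumed): merge the symbolic plan with grounded affordances
    into an executable robot program (represented here as plain text)."""
    grasp = affordances[0].grasp_type
    return "\n".join(f"{step}  # using {grasp}" for step in task_plan)


if __name__ == "__main__":
    frames = []  # placeholder for decoded video frames
    description = describe_video_with_vlm(frames)
    plan = plan_symbolic_task(description)
    affordances = ground_plan_in_video(plan, frames)
    print(compile_robot_program(plan, affordances))
```

The staged structure mirrors the abstract's point about human supervision: because each stage hands off an inspectable artifact (text description, symbolic plan, grounded affordances), a human can review or correct intermediate outputs before robot execution.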
doi_str_mv | 10.1109/LRA.2024.3477090 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2377-3766 |
ispartof | IEEE robotics and automation letters, 2024-11, Vol.9 (11), p.10567-10574 |
issn | 2377-3766 2377-3766 |
language | eng |
recordid | cdi_ieee_primary_10711245 |
source | IEEE Electronic Library (IEL) |
subjects | Affordances; Collision avoidance; Data models; Grasping (robotics); Grounding; Imitation learning; Machine vision; Pipelines; Planning; Robotics; Robots; Task and motion planning; Task planning; Task planning (robotics); Training; Video; Vision systems; Visual tasks; Visualization |
title | GPT-4V(ision) for Robotics: Multimodal Task Planning From Human Demonstration |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-22T02%3A50%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_ieee_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=GPT-4V(ision)%20for%20Robotics:%20Multimodal%20Task%20Planning%20From%20Human%20Demonstration&rft.jtitle=IEEE%20robotics%20and%20automation%20letters&rft.au=Wake,%20Naoki&rft.date=2024-11-01&rft.volume=9&rft.issue=11&rft.spage=10567&rft.epage=10574&rft.pages=10567-10574&rft.issn=2377-3766&rft.eissn=2377-3766&rft.coden=IRALC6&rft_id=info:doi/10.1109/LRA.2024.3477090&rft_dat=%3Cproquest_ieee_%3E3117128646%3C/proquest_ieee_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3117128646&rft_id=info:pmid/&rft_ieee_id=10711245&rfr_iscdi=true |