Bi-VLA: Vision-Language-Action Model-Based System for Bimanual Robotic Dexterous Manipulations
This research introduces the Bi-VLA (Vision-Language-Action) model, a novel system designed for bimanual robotic dexterous manipulation that seamlessly integrates vision for scene understanding, language comprehension for translating human instructions into executable code, and physical action generation. We evaluated the system's functionality through a series of household tasks, including the preparation of a desired salad upon human request. Bi-VLA demonstrates the ability to interpret complex human instructions, perceive and understand the visual context of ingredients, and execute precise bimanual actions to prepare the requested salad. We assessed the system's performance in terms of accuracy, efficiency, and adaptability to different salad recipes and human preferences through a series of experiments. Our results show a 100% success rate in generating the correct executable code by the Language Module, a 96.06% success rate in detecting specific ingredients by the Vision Module, and an overall success rate of 83.4% in correctly executing user-requested tasks.
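The abstract describes a three-module architecture: a Language Module that turns a request into executable code, a Vision Module that grounds ingredient names in the scene, and a bimanual action stage. Below is a minimal, hypothetical sketch of how such a pipeline could be wired together; it is not from the paper, and all class names, method signatures, and the canned plan are illustrative assumptions.

```python
# Hypothetical Bi-VLA-style pipeline sketch (not the authors' implementation):
# language module -> plan steps, vision module -> ground targets, controller
# -> execute primitives on a dual-arm robot. All names are illustrative.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str            # e.g. "tomato"
    position: tuple       # (x, y, z) in the robot frame


class LanguageModule:
    def plan(self, instruction: str) -> list[str]:
        """Translate a request like 'make a salad' into ordered primitives.
        A real system would query an LLM here; we return a canned plan."""
        return ["pick(tomato)", "cut(tomato)", "pick(cucumber)", "cut(cucumber)"]


class VisionModule:
    def detect(self, ingredient: str) -> Detection | None:
        """Locate an ingredient in the camera image (stubbed)."""
        return Detection(ingredient, (0.4, 0.1, 0.02))


class BimanualController:
    def execute(self, step: str, target: Detection) -> bool:
        """Run one primitive on the two arms (stubbed as always succeeding)."""
        print(f"executing {step} at {target.position}")
        return True


def run(instruction: str) -> float:
    """Execute a plan and report the fraction of steps that succeeded."""
    lang, vision, robot = LanguageModule(), VisionModule(), BimanualController()
    steps = lang.plan(instruction)
    ok = 0
    for step in steps:
        ingredient = step[step.index("(") + 1 : -1]   # e.g. "tomato"
        det = vision.detect(ingredient)
        if det is not None and robot.execute(step, det):
            ok += 1
    return ok / len(steps)


if __name__ == "__main__":
    print(f"success rate: {run('make a salad'):.0%}")
```

Keeping the modules behind narrow interfaces like this mirrors how the paper reports per-module success rates (language, vision, end-to-end) as separate figures.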
Saved in:
Published in: | arXiv.org 2024-08 |
---|---|
Main authors: | Gbagbe, Koffivi Fidèle; Miguel Altamirano Cabrera; Alabbas, Ali; Alyunes, Oussama; Lykov, Artem; Tsetserukou, Dzmitry |
Format: | Article |
Language: | English |
Subjects: | Ingredients; Model-based systems; Modules; Performance evaluation; Vision |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Gbagbe, Koffivi Fidèle; Miguel Altamirano Cabrera; Alabbas, Ali; Alyunes, Oussama; Lykov, Artem; Tsetserukou, Dzmitry |
description | This research introduces the Bi-VLA (Vision-Language-Action) model, a novel system designed for bimanual robotic dexterous manipulation that seamlessly integrates vision for scene understanding, language comprehension for translating human instructions into executable code, and physical action generation. We evaluated the system's functionality through a series of household tasks, including the preparation of a desired salad upon human request. Bi-VLA demonstrates the ability to interpret complex human instructions, perceive and understand the visual context of ingredients, and execute precise bimanual actions to prepare the requested salad. We assessed the system's performance in terms of accuracy, efficiency, and adaptability to different salad recipes and human preferences through a series of experiments. Our results show a 100% success rate in generating the correct executable code by the Language Module, a 96.06% success rate in detecting specific ingredients by the Vision Module, and an overall success rate of 83.4% in correctly executing user-requested tasks. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3054982523 |
source | Free E-Journals |
subjects | Ingredients; Model-based systems; Modules; Performance evaluation; Vision |
title | Bi-VLA: Vision-Language-Action Model-Based System for Bimanual Robotic Dexterous Manipulations |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-20T11%3A23%3A02IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Bi-VLA:%20Vision-Language-Action%20Model-Based%20System%20for%20Bimanual%20Robotic%20Dexterous%20Manipulations&rft.jtitle=arXiv.org&rft.au=Gbagbe,%20Koffivi%20Fid%C3%A8le&rft.date=2024-08-19&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3054982523%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3054982523&rft_id=info:pmid/&rfr_iscdi=true |