GENOME: GenerativE Neuro-symbOlic visual reasoning by growing and reusing ModulEs

Recent works have shown that Large Language Models (LLMs) can empower traditional neuro-symbolic models via their programming capabilities, translating language into module descriptions and thereby achieving strong visual reasoning results while maintaining the model's transparency and efficiency. However, these models usually exhaustively generate the entire code snippet for each new instance of a task, which is highly inefficient. We propose generative neuro-symbolic visual reasoning by growing and reusing modules. Specifically, our model consists of three stages: module initialization, module generation, and module execution. First, given a vision-language task, we adopt LLMs to examine whether we can reuse and grow over established modules to handle this new task. If not, we initialize a new module needed by the task and specify its inputs and outputs. The new module is then created by querying LLMs to generate code snippets that match the requirements. To get a better sense of the new module's ability, we treat few-shot training examples as test cases and check whether the module passes them. If so, the new module is added to the module library for future reuse. Finally, we evaluate the model on the test set by executing the parsed programs with the newly made visual modules. The proposed model has several advantages. First, it performs competitively on standard tasks like visual question answering and referring expression comprehension; second, modules learned from one task can be seamlessly transferred to new tasks; finally, it can adapt to new visual reasoning tasks by observing a few training examples and reusing modules.
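
The abstract outlines a three-stage pipeline (module initialization, module generation, module execution) in which an LLM grows a library of reusable code modules, validating each candidate against few-shot examples before admitting it. Below is a minimal Python sketch of the grow-and-reuse loop only; module execution over parsed programs is omitted. Every name in it (ModuleLibrary, generate_module, the stand-in module body and reuse check) is a hypothetical illustration of the described workflow, not the paper's actual code or API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Module:
    name: str
    signature: str  # declared inputs/outputs from the initialization stage
    fn: Callable    # executable code snippet produced by the LLM


class ModuleLibrary:
    def __init__(self) -> None:
        self.modules: Dict[str, Module] = {}

    def find_reusable(self, task_description: str) -> List[Module]:
        # Stage 1 (module initialization): in the paper an LLM judges whether
        # existing modules already cover the new task; a simple keyword match
        # stands in for that judgment here.
        return [m for name, m in self.modules.items() if name in task_description]

    def add(self, module: Module) -> None:
        self.modules[module.name] = module


def generate_module(name: str, signature: str) -> Module:
    # Stage 2 (module generation): the paper queries an LLM for a code snippet
    # matching the declared signature; a trivial hand-written body stands in.
    def fn(*args):
        return f"{name}({', '.join(map(str, args))})"
    return Module(name, signature, fn)


def grow_and_reuse(library: ModuleLibrary, task: str, name: str,
                   signature: str, test_cases: List[Tuple[tuple, object]]) -> ModuleLibrary:
    """One pass of the grow-and-reuse loop for a single task."""
    if library.find_reusable(task):
        return library  # reuse an existing module: no new code is generated
    candidate = generate_module(name, signature)
    # Few-shot training examples double as test cases: the candidate is added
    # to the library only if it reproduces the expected outputs.
    if all(candidate.fn(*inp) == out for inp, out in test_cases):
        library.add(candidate)
    return library


if __name__ == "__main__":
    lib = ModuleLibrary()
    grow_and_reuse(lib, "locate the red object", "filter_red", "(image) -> image",
                   [(("img",), "filter_red(img)")])
    print(sorted(lib.modules))  # ['filter_red']
```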

Bibliographic Details
Published in: arXiv.org, 2023-11
Authors: Chen, Zhenfang; Sun, Rui; Liu, Wenjun; Hong, Yining; Chuang Gan
Format: Article
Language: English (eng)
Subjects: Large language models; Model testing; Module libraries; Modules; Reasoning; Reuse; Training; Visual observation; Visual tasks
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Online access: Full text