GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models

In this work, we propose a novel method (GLOV) enabling Large Language Models (LLMs) to act as implicit Optimizers for Vision-Language Models (VLMs) to enhance downstream vision tasks. Our GLOV meta-prompts an LLM with the downstream task description, querying it for suitable VLM prompts (e.g., for...

Detailed description

Bibliographic details
Main authors: Mirza, M. Jehanzeb, Zhao, Mengjie, Mao, Zhuoyuan, Doveh, Sivan, Lin, Wei, Gavrikov, Paul, Dorkenwald, Michael, Yang, Shiqi, Jha, Saurav, Wakaki, Hiromi, Mitsufuji, Yuki, Possegger, Horst, Feris, Rogerio, Karlinsky, Leonid, Glass, James
Format: Article
Language: eng
creator Mirza, M. Jehanzeb
Zhao, Mengjie
Mao, Zhuoyuan
Doveh, Sivan
Lin, Wei
Gavrikov, Paul
Dorkenwald, Michael
Yang, Shiqi
Jha, Saurav
Wakaki, Hiromi
Mitsufuji, Yuki
Possegger, Horst
Feris, Rogerio
Karlinsky, Leonid
Glass, James
description In this work, we propose a novel method (GLOV) enabling Large Language Models (LLMs) to act as implicit Optimizers for Vision-Language Models (VLMs) to enhance downstream vision tasks. Our GLOV meta-prompts an LLM with the downstream task description, querying it for suitable VLM prompts (e.g., for zero-shot classification with CLIP). These prompts are ranked according to a purity measure obtained through a fitness function. In each optimization step, the ranked prompts are fed as in-context examples (with their accuracies) to equip the LLM with knowledge of the type of text prompts preferred by the downstream VLM. Furthermore, we explicitly steer the LLM generation process in each optimization step by adding an offset difference vector, computed from the embeddings of the positive and negative solutions found by the LLM in previous optimization steps, to an intermediate layer of the network for the next generation step. This offset vector steers the LLM generation toward the type of language preferred by the downstream VLM, resulting in enhanced performance on the downstream vision tasks. We comprehensively evaluate our GLOV on 16 diverse datasets using two families of VLMs, i.e., dual-encoder (e.g., CLIP) and encoder-decoder (e.g., LLaVa) models, showing that the discovered solutions can enhance the recognition performance by up to 15.0% and 57.5% (3.8% and 21.6% on average) for these models.
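To make the procedure in the abstract concrete, the following is a minimal Python sketch of the iterative prompt-optimization loop it describes: query an LLM for candidate VLM prompts, score them with a fitness function, and feed the ranked (prompt, accuracy) pairs back as in-context examples. It is an illustration only; the names (glov_optimize, query_llm, evaluate_prompt, keep_top_k) and the meta-prompt wording are hypothetical, not the authors' implementation, and the activation-steering step (adding an offset difference vector to an intermediate LLM layer) is only noted in a comment.

# Illustrative sketch of a GLOV-style optimization loop, based only on the abstract.
# All function and parameter names here are placeholder assumptions.
from typing import Callable, List, Tuple

def glov_optimize(
    task_description: str,
    query_llm: Callable[[str], List[str]],    # LLM proposes candidate VLM prompts
    evaluate_prompt: Callable[[str], float],  # fitness, e.g. classification accuracy with CLIP
    num_steps: int = 10,
    keep_top_k: int = 5,
) -> Tuple[str, float]:
    """Iteratively ask an LLM for VLM prompts, rank them by fitness, and feed the
    ranked (prompt, score) pairs back as in-context examples in the meta-prompt."""
    history: List[Tuple[str, float]] = []
    for _ in range(num_steps):
        # Build the meta-prompt: task description plus previously scored prompts, worst to best.
        ranked = sorted(history, key=lambda x: x[1])
        examples = "\n".join(f"Prompt: {p} | Accuracy: {s:.1%}" for p, s in ranked)
        meta_prompt = (
            f"Task: {task_description}\n"
            f"Previously tried prompts (worst to best):\n{examples}\n"
            "Propose a better prompt for the vision-language model."
        )
        # In GLOV, generation is additionally steered by adding an offset vector
        # (difference of embeddings of good and bad past prompts) to an intermediate
        # LLM layer; that hidden-state intervention is omitted in this sketch.
        candidates = query_llm(meta_prompt)
        # Score each candidate with the downstream fitness function and keep the best few.
        scored = [(c, evaluate_prompt(c)) for c in candidates]
        history = sorted(history + scored, key=lambda x: x[1], reverse=True)[:keep_top_k]
    return history[0]  # best (prompt, fitness) found

For instance, query_llm could wrap a chat-LLM call that returns several candidate prompt templates, and evaluate_prompt could measure CLIP zero-shot accuracy with that template on a small held-out split.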
doi_str_mv 10.48550/arxiv.2410.06154
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2410.06154
language eng
recordid cdi_arxiv_primary_2410_06154
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models