Prompt Tuning with Soft Context Sharing for Vision-Language Models
Vision-language models have recently shown great potential on many tasks in computer vision. Meanwhile, prior work demonstrates prompt tuning designed for vision-language models could acquire superior performance on few-shot image recognition compared to linear probe, a strong baseline. In practice,...
Saved in:
Published in: | arXiv.org 2024-03 |
---|---|
Main authors: | Ding, Kun; Wang, Ying; Liu, Pengzhang; Yu, Qiang; Zhang, Haojian; Xiang, Shiming; Pan, Chunhong |
Format: | Article |
Language: | eng |
Keywords: | Computer vision; Context; Image acquisition; Language; Learning; Object recognition; Source code |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Ding, Kun; Wang, Ying; Liu, Pengzhang; Yu, Qiang; Zhang, Haojian; Xiang, Shiming; Pan, Chunhong |
description | Vision-language models have recently shown great potential on many tasks in computer vision. Meanwhile, prior work demonstrates that prompt tuning designed for vision-language models can achieve superior performance on few-shot image recognition compared to linear probing, a strong baseline. In practice, many few-shot tasks are inherently correlated, particularly within specialized domains, yet such task relationships have previously been overlooked. Inspired by the fact that modeling task relationships via multi-task learning usually boosts performance, we propose SoftCPT (Soft Context Sharing for Prompt Tuning), a novel method that tunes pre-trained vision-language models on multiple target few-shot tasks jointly. Specifically, we design a task-shared meta network that generates the prompt context for each task, taking the task name together with a learnable task context as input. The parameters of this meta network, as well as the task context, are tuned on the joint training set of all tasks, so the prompt contexts of all tasks are shared in a soft manner. Extensive experiments across four multi-task few-shot datasets covering 44 tasks and 1593 categories demonstrate that SoftCPT significantly outperforms single-task prompt tuning methods, highlighting the effectiveness of multi-task learning for vision-language prompt tuning. Code is available at https://github.com/kding1225/softcpt. (A minimal sketch of the meta network appears after this record.) |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2708081528 |
source | Free E-Journals |
subjects | Computer vision; Context; Image acquisition; Language; Learning; Object recognition; Source code |
title | Prompt Tuning with Soft Context Sharing for Vision-Language Models |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T19%3A47%3A15IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Prompt%20Tuning%20with%20Soft%20Context%20Sharing%20for%20Vision-Language%20Models&rft.jtitle=arXiv.org&rft.au=Ding,%20Kun&rft.date=2024-03-31&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2708081528%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2708081528&rft_id=info:pmid/&rfr_iscdi=true |
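The abstract above centers on SoftCPT's task-shared meta network, which maps a task-name embedding, concatenated with a learnable task-shared context vector, to a sequence of soft prompt tokens for each task. The PyTorch sketch below illustrates that idea only; the class name `TaskSharedMetaNet`, the tensor shapes, and the single linear projection are assumptions for illustration, not the authors' released implementation (see the linked GitHub repository for that).

```python
import torch
import torch.nn as nn


class TaskSharedMetaNet(nn.Module):
    """Minimal sketch: task-name embedding + learnable task context
    -> per-task soft prompt context (ctx_len token embeddings)."""

    def __init__(self, feat_dim=512, ctx_len=16, ctx_dim=512):
        super().__init__()
        # Learnable task context, shared by all tasks and tuned jointly
        # with the meta network on the union of all few-shot train sets.
        self.task_ctx = nn.Parameter(torch.empty(feat_dim))
        nn.init.normal_(self.task_ctx, std=0.02)
        # Shared projection from [task feature; task context] to the
        # flattened prompt context; a single linear layer is an assumption.
        self.proj = nn.Linear(2 * feat_dim, ctx_len * ctx_dim)
        self.ctx_len, self.ctx_dim = ctx_len, ctx_dim

    def forward(self, task_name_feat: torch.Tensor) -> torch.Tensor:
        # task_name_feat: (feat_dim,) embedding of the task name, e.g.
        # produced by the frozen CLIP text encoder.
        x = torch.cat([task_name_feat, self.task_ctx], dim=-1)
        return self.proj(x).view(self.ctx_len, self.ctx_dim)


# Hypothetical usage: prepend the generated context to each class name's
# token embeddings before the frozen text encoder (CoOp-style); only the
# meta network and the task context receive gradients during tuning.
meta_net = TaskSharedMetaNet()
task_name_feat = torch.randn(512)   # stand-in for an encoded task name
prompt_ctx = meta_net(task_name_feat)
print(prompt_ctx.shape)             # torch.Size([16, 512])
```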