UniBoost: Unsupervised Unimodal Pre-training for Boosting Zero-shot Vision-Language Tasks

Large-scale joint training of multimodal models, e.g., CLIP, has demonstrated great performance in many vision-language tasks. However, image-text pairs for pre-training are restricted to the intersection of images and texts, limiting their ability to cover a large distribution of real-world data,...

Full description

Saved in:
Bibliographic Details
Main Authors: Sun, Yanan, Zhong, Zihan, Fan, Qi, Tang, Chi-Keung, Tai, Yu-Wing
Format: Article
Language: eng
Subjects:
Online Access: Request full text
creator Sun, Yanan
Zhong, Zihan
Fan, Qi
Tang, Chi-Keung
Tai, Yu-Wing
description Large-scale joint training of multimodal models, e.g., CLIP, has demonstrated great performance in many vision-language tasks. However, image-text pairs for pre-training are restricted to the intersection of images and texts, limiting their ability to cover a large distribution of real-world data, where noise can also be introduced as misaligned pairs during pre-processing. Conversely, unimodal models trained on text or image data alone through unsupervised techniques can achieve broader coverage of diverse real-world data and are not constrained by the requirement that image and text be simultaneously present. In this paper, we demonstrate that using large-scale unsupervised unimodal models as pre-training can enhance the zero-shot performance of image-text pair models. Our thorough studies validate that models pre-trained as such can learn rich representations of both modalities, improving their ability to understand how images and text relate to each other. Our experiments show that unimodal pre-training outperforms state-of-the-art CLIP-based models by 6.5% (52.3% $\rightarrow$ 58.8%) on PASCAL-5$^i$ and 6.2% (27.2% $\rightarrow$ 33.4%) on COCO-20$^i$ semantic segmentation under the zero-shot setting, respectively. By learning representations of both modalities, unimodal pre-training offers broader coverage, reduced misalignment errors, and the ability to capture more complex features and patterns in real-world data, resulting in better performance, especially for zero-shot vision-language tasks.
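A minimal sketch (not the authors' code) of the general recipe the abstract describes: a CLIP-style dual encoder whose image and text branches start from unsupervised unimodal pre-training and are then aligned on image-text pairs with a symmetric contrastive loss, after which zero-shot prediction scores an image embedding against text embeddings of class prompts. The encoder classes, toy backbones, dimensions, and training snippet below are illustrative assumptions; in practice each branch would load large-scale unsupervised checkpoints (e.g., a masked-image-modeling vision backbone and a masked-language-model text backbone).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """Stand-in for an unsupervised pre-trained vision backbone (hypothetical)."""
    def __init__(self, embed_dim=512):
        super().__init__()
        # In practice this backbone would be initialized from unimodal pre-training.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, images):
        return self.proj(self.backbone(images))

class TextEncoder(nn.Module):
    """Stand-in for an unsupervised pre-trained language model (hypothetical)."""
    def __init__(self, vocab_size=30522, embed_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 256)
        self.encoder = nn.GRU(256, 256, batch_first=True)  # placeholder backbone
        self.proj = nn.Linear(256, embed_dim)

    def forward(self, token_ids):
        _, h = self.encoder(self.embed(token_ids))
        return self.proj(h[-1])

def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched image-text pairs (CLIP-style)."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

if __name__ == "__main__":
    image_enc, text_enc = ImageEncoder(), TextEncoder()
    images = torch.randn(8, 3, 224, 224)           # dummy image batch
    tokens = torch.randint(0, 30522, (8, 32))      # dummy paired captions
    loss = contrastive_alignment_loss(image_enc(images), text_enc(tokens))
    print(f"contrastive alignment loss: {loss.item():.4f}")

    # Zero-shot use (illustrative): score images against text embeddings of
    # class prompts and pick the best-matching class per image.
    class_tokens = torch.randint(0, 30522, (5, 32))  # 5 hypothetical class prompts
    sims = (F.normalize(image_enc(images), dim=-1)
            @ F.normalize(text_enc(class_tokens), dim=-1).t())
    print("predicted classes:", sims.argmax(dim=-1).tolist())
```

The design choice illustrated here is only the initialization: rather than training both branches from scratch on paired data, each branch inherits representations learned on much larger unimodal corpora, and the paired data is used only to align the two embedding spaces.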
doi_str_mv 10.48550/arxiv.2306.04715
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2306.04715
language eng
recordid cdi_arxiv_primary_2306_04715
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title UniBoost: Unsupervised Unimodal Pre-training for Boosting Zero-shot Vision-Language Tasks
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-22T07%3A32%3A33IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=UniBoost:%20Unsupervised%20Unimodal%20Pre-training%20for%20Boosting%20Zero-shot%20Vision-Language%20Tasks&rft.au=Sun,%20Yanan&rft.date=2023-06-07&rft_id=info:doi/10.48550/arxiv.2306.04715&rft_dat=%3Carxiv_GOX%3E2306_04715%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true