Three Towers: Flexible Contrastive Learning with Pretrained Image Models

We introduce Three Towers (3T), a flexible method to improve the contrastive learning of vision-language models by incorporating pretrained image classifiers. While contrastive models are usually trained from scratch, LiT (Zhai et al., 2022) has recently shown performance gains from using pretrained classifier embeddings. However, LiT directly replaces the image tower with the frozen embeddings, excluding any potential benefits from training the image tower contrastively. With 3T, we propose a more flexible strategy that allows the image tower to benefit from both pretrained embeddings and contrastive training. To achieve this, we introduce a third tower that contains the frozen pretrained embeddings, and we encourage alignment between this third tower and the main image-text towers. Empirically, 3T consistently improves over LiT and the CLIP-style from-scratch baseline for retrieval tasks. For classification, 3T reliably improves over the from-scratch baseline, and while it underperforms relative to LiT for JFT-pretrained models, it outperforms LiT for ImageNet-21k and Places365 pretraining.
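
The abstract describes the 3T objective only at a high level. As a purely illustrative aid, here is a minimal PyTorch sketch of what such a three-tower contrastive objective could look like. It is an assumption-laden reconstruction, not the authors' implementation: the function names (contrastive_loss, three_towers_loss), the alignment_weight parameter, and the tower modules are all hypothetical.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(a, b, temperature=0.07):
    # Symmetric CLIP-style InfoNCE loss between two batches of embeddings.
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature           # (batch, batch) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.t(), labels)) / 2

def three_towers_loss(images, texts, image_tower, text_tower, frozen_tower,
                      alignment_weight=1.0):
    img_emb = image_tower(images)              # trainable image tower
    txt_emb = text_tower(texts)                # trainable text tower
    with torch.no_grad():                      # third tower stays frozen
        frozen_emb = frozen_tower(images)

    # Main image-text contrastive loss, as in CLIP-style training.
    main = contrastive_loss(img_emb, txt_emb)
    # Alignment terms pulling both main towers toward the frozen
    # pretrained embeddings (the sketch assumes all towers already
    # emit same-dimensional embeddings).
    align = (contrastive_loss(img_emb, frozen_emb)
             + contrastive_loss(txt_emb, frozen_emb))
    return main + alignment_weight * align
```

A real implementation would at least need projection heads on the towers so that embedding dimensions match; the paper's exact loss composition and head design may differ from this sketch.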

Bibliographic Details
Main Authors: Kossen, Jannik; Collier, Mark; Mustafa, Basil; Wang, Xiao; Zhai, Xiaohua; Beyer, Lucas; Steiner, Andreas; Berent, Jesse; Jenatton, Rodolphe; Kokiopoulou, Efi
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning
DOI: 10.48550/arXiv.2305.16999
Source: arXiv.org
Online Access: Order full text