Unveiling Backbone Effects in CLIP: Exploring Representational Synergies and Variances
Contrastive Language-Image Pretraining (CLIP) stands out as a prominent method for image representation learning. Various neural architectures, spanning Transformer-based models like Vision Transformers (ViTs) to Convolutional Networks (ConvNets) like ResNets, are trained with CLIP and serve as universal backbones across diverse vision tasks. Despite utilizing the same data and training objectives, the effectiveness of representations learned by these architectures raises a critical question. Our investigation explores the differences in CLIP performance among these backbone architectures, revealing significant disparities in their classifications. Notably, normalizing these representations results in substantial performance variations. Our findings showcase a remarkable possible synergy between backbone predictions that could reach an improvement of over 20% through informed selection of the appropriate backbone. Moreover, we propose a simple, yet effective approach to combine predictions from multiple backbones, leading to a notable performance boost of up to 6.34%. We will release the code for reproducing the results.
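The abstract does not spell out the combination rule, and the released code is not part of this record, so the following is a minimal, self-contained sketch of the two ingredients it mentions: L2-normalizing each backbone's representations before zero-shot scoring, and combining class predictions across backbones. The backbone names, feature dimensions, and the probability-averaging rule are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' released code) of the two effects the
# abstract describes: L2-normalizing per-backbone representations before
# classification, and averaging class probabilities across backbones.
# Backbone names and feature dimensions below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1, eps=1e-12):
    """Project rows onto the unit sphere, as CLIP does before scoring."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def zero_shot_probs(img_feats, class_feats, temperature=100.0):
    """Cosine-similarity logits between normalized image and class embeddings."""
    img = l2_normalize(img_feats)
    cls = l2_normalize(class_feats)
    return softmax(temperature * img @ cls.T)

# Stand-in features for two hypothetical CLIP backbones (e.g. a ViT and a
# ResNet); in practice these would come from the pretrained encoders.
n_images, n_classes = 8, 10
probs_vit = zero_shot_probs(rng.normal(size=(n_images, 512)),
                            rng.normal(size=(n_classes, 512)))
probs_rn = zero_shot_probs(rng.normal(size=(n_images, 1024)),
                           rng.normal(size=(n_classes, 1024)))

# One simple ensemble: average per-backbone class probabilities, then argmax.
ensemble = (probs_vit + probs_rn) / 2.0
print("ensemble predictions:", ensemble.argmax(axis=1))
```

Averaging probabilities is only one plausible combination rule; the up-to-6.34% gain reported in the abstract refers to the paper's own method, and the over-20% figure refers to informed selection of the best single backbone rather than to averaging.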
Published in: arXiv.org, 2023-12
Main authors: Rodriguez-Opazo, Cristian; Marrese-Taylor, Edison; Abbasnejad, Ehsan; Damirchi, Hamed; Jara, Ignacio M; Bravo-Marquez, Felipe; van den Hengel, Anton
Format: Article
Language: English
Subjects: Representations
Online access: Full text
container_title | arXiv.org |
---|---|
creator | Rodriguez-Opazo, Cristian; Marrese-Taylor, Edison; Abbasnejad, Ehsan; Damirchi, Hamed; Jara, Ignacio M; Bravo-Marquez, Felipe; van den Hengel, Anton |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-12 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2905672218 |
source | Free E-Journals |
subjects | Representations |
title | Unveiling Backbone Effects in CLIP: Exploring Representational Synergies and Variances |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-03T18%3A05%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Unveiling%20Backbone%20Effects%20in%20CLIP:%20Exploring%20Representational%20Synergies%20and%20Variances&rft.jtitle=arXiv.org&rft.au=Rodriguez-Opazo,%20Cristian&rft.date=2023-12-22&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2905672218%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2905672218&rft_id=info:pmid/&rfr_iscdi=true |