ContextCLIP: Contextual Alignment of Image-Text pairs on CLIP visual representations
State-of-the-art empirical work has shown that visual representations learned by deep neural networks are robust in nature and capable of performing classification tasks on diverse datasets. For example, CLIP demonstrated zero-shot transfer performance on multiple datasets for classification tasks in a joint embedding space of image and text pairs. However, it showed negative transfer performance on standard datasets, e.g., Birdsnap, RESISC45, and MNIST. In this paper, we propose ContextCLIP, a contextual and contrastive learning framework for the contextual alignment of image-text pairs by learning robust visual representations on the Conceptual Captions dataset. Our framework was observed to improve image-text alignment by aligning text and image representations contextually in the joint embedding space. ContextCLIP showed good qualitative performance for text-to-image retrieval tasks and enhanced classification accuracy. We evaluated our model quantitatively with zero-shot transfer and fine-tuning experiments on the CIFAR-10, CIFAR-100, Birdsnap, RESISC45, and MNIST datasets for the classification task.
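The abstract describes contrastive alignment of image and text representations in a joint embedding space. As a rough illustration of the kind of objective such a framework builds on, the sketch below implements the standard CLIP-style symmetric contrastive (InfoNCE) loss in PyTorch. This is not the paper's exact ContextCLIP objective: the function name, tensor shapes, and temperature value are illustrative assumptions, and the additional contextual-alignment component is not specified in the abstract and therefore not reproduced here.

```python
# Minimal sketch (assumption): CLIP-style symmetric contrastive loss over a
# batch of paired image/text embeddings. ContextCLIP's contextual term is
# not included, as it is not described in this record.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """image_emb, text_emb: (batch, dim) outputs of the two encoders."""
    # L2-normalize so the dot product is a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix; matching pairs lie on the diagonal.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: image-to-text and text-to-image directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

if __name__ == "__main__":
    # Random embeddings stand in for encoder outputs in this toy example.
    img = torch.randn(8, 512)
    txt = torch.randn(8, 512)
    print(clip_contrastive_loss(img, txt).item())
```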
Saved in:
Published in: | arXiv.org 2022-11 |
---|---|
Main authors: | Grover, Chanda; Mastan, Indra Deep; Gupta, Debayan |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Grover, Chanda; Mastan, Indra Deep; Gupta, Debayan |
description | State-of-the-art empirical work has shown that visual representations learned by deep neural networks are robust in nature and capable of performing classification tasks on diverse datasets. For example, CLIP demonstrated zero-shot transfer performance on multiple datasets for classification tasks in a joint embedding space of image and text pairs. However, it showed negative transfer performance on standard datasets, e.g., Birdsnap, RESISC45, and MNIST. In this paper, we propose ContextCLIP, a contextual and contrastive learning framework for the contextual alignment of image-text pairs by learning robust visual representations on the Conceptual Captions dataset. Our framework was observed to improve image-text alignment by aligning text and image representations contextually in the joint embedding space. ContextCLIP showed good qualitative performance for text-to-image retrieval tasks and enhanced classification accuracy. We evaluated our model quantitatively with zero-shot transfer and fine-tuning experiments on the CIFAR-10, CIFAR-100, Birdsnap, RESISC45, and MNIST datasets for the classification task. |
doi_str_mv | 10.48550/arxiv.2211.07122 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2211_07122 |
source | arXiv.org; Free E-Journals |
subjects | Alignment; Artificial neural networks; Computer Science - Computer Vision and Pattern Recognition; Datasets; Embedding; Image classification; Image enhancement; Machine learning; Representations; Retrieval; Robustness; Visual observation |
title | ContextCLIP: Contextual Alignment of Image-Text pairs on CLIP visual representations |