Benchmarking Robustness of Text-Image Composed Retrieval

Text-image composed retrieval aims to retrieve the target image through the composed query, which is specified in the form of an image plus some text that describes desired modifications to the input image. It has recently attracted attention due to its ability to leverage both information-rich images and concise language to precisely express the requirements for target images. However, the robustness of these approaches against real-world corruptions or further text understanding has never been studied. In this paper, we perform the first robustness study and establish three new diversified benchmarks for systematic analysis of text-image composed retrieval against natural corruptions in both vision and text and further probe textural understanding. For natural corruption analysis, we introduce two new large-scale benchmark datasets, CIRR-C and FashionIQ-C for testing in open domain and fashion domain respectively, both of which apply 15 visual corruptions and 7 textural corruptions. For textural understanding analysis, we introduce a new diagnostic dataset CIRR-D by expanding the original raw data with synthetic data, which contains modified text to better probe textual understanding ability including numerical variation, attribute variation, object removal, background variation, and fine-grained evaluation. The code and benchmark datasets are available at https://github.com/SunTongtongtong/Benchmark-Robustness-Text-Image-Compose-Retrieval.
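To make the two corruption families concrete, below is a minimal illustrative sketch of one visual corruption (Gaussian noise at a chosen severity level) and one textual corruption (adjacent-character swaps mimicking typos). The functions `gaussian_noise` and `char_swap`, and the severity scale used, are assumptions for illustration only, not the benchmark's actual implementations.

```python
import random

import numpy as np


def gaussian_noise(image: np.ndarray, severity: int = 1) -> np.ndarray:
    """Add zero-mean Gaussian noise; severity 1-5 picks a larger std dev.

    Expects pixel values in [0, 255]; returns values clipped back to [0, 255].
    """
    std = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
    noisy = image / 255.0 + np.random.normal(0.0, std, image.shape)
    return np.clip(noisy, 0.0, 1.0) * 255.0


def char_swap(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Swap adjacent alphabetic characters with probability `rate` (typo-like noise)."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


# Corrupt a dummy image and a modification text, as a benchmark pipeline might.
img = np.zeros((8, 8, 3), dtype=np.float64)
corrupted_img = gaussian_noise(img, severity=3)
corrupted_txt = char_swap("change the red dress to blue", rate=0.5)
print(corrupted_img.shape)
print(corrupted_txt)
```

A full benchmark would apply each corruption type at several severity levels to every query and measure the drop in retrieval recall.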

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Sun, Shitong; Gu, Jindong; Gong, Shaogang
Format: Article
Language: English
Subjects:
Online Access: Order full text
creator Sun, Shitong
Gu, Jindong
Gong, Shaogang
description Text-image composed retrieval aims to retrieve the target image through the composed query, which is specified in the form of an image plus some text that describes desired modifications to the input image. It has recently attracted attention due to its ability to leverage both information-rich images and concise language to precisely express the requirements for target images. However, the robustness of these approaches against real-world corruptions or further text understanding has never been studied. In this paper, we perform the first robustness study and establish three new diversified benchmarks for systematic analysis of text-image composed retrieval against natural corruptions in both vision and text and further probe textural understanding. For natural corruption analysis, we introduce two new large-scale benchmark datasets, CIRR-C and FashionIQ-C for testing in open domain and fashion domain respectively, both of which apply 15 visual corruptions and 7 textural corruptions. For textural understanding analysis, we introduce a new diagnostic dataset CIRR-D by expanding the original raw data with synthetic data, which contains modified text to better probe textual understanding ability including numerical variation, attribute variation, object removal, background variation, and fine-grained evaluation. The code and benchmark datasets are available at https://github.com/SunTongtongtong/Benchmark-Robustness-Text-Image-Compose-Retrieval.
doi_str_mv 10.48550/arxiv.2311.14837
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2311.14837
language eng
recordid cdi_arxiv_primary_2311_14837
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Information Retrieval
title Benchmarking Robustness of Text-Image Composed Retrieval