Semantic-Preserving Augmentation for Robust Image-Text Retrieval

Image-text retrieval is the task of searching for the proper textual description of the visual world and vice versa. One challenge of this task is its vulnerability to corruptions of the input image and text. Such corruptions are often unobserved during training and substantially degrade the retrieval model's decision quality. In this paper, we propose a novel image-text retrieval technique, referred to as robust visual semantic embedding (RVSE), which consists of novel image-based and text-based augmentation techniques called semantic-preserving augmentation for images (SPAugI) and for text (SPAugT). Since SPAugI and SPAugT change the original data in a way that preserves its semantic information, we enforce the feature extractors to generate semantic-aware embedding vectors regardless of the corruption, improving the model's robustness significantly. Through extensive experiments on benchmark datasets, we show that RVSE outperforms conventional retrieval schemes in terms of image-text retrieval performance.
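
The abstract describes the method only at a high level: SPAugI and SPAugT corrupt an image or caption while preserving its meaning, and the feature extractors are trained so that clean and augmented views map to aligned embeddings. The sketch below illustrates that general idea in plain NumPy. The specific transforms (flip, brightness jitter, noise, synonym swaps, word dropout), the hypothetical synonym table, the toy linear and bag-of-words encoders, and the InfoNCE-style alignment loss are all assumptions for illustration, not the paper's actual SPAugI/SPAugT or RVSE implementation.

```python
# Minimal sketch of semantic-preserving augmentation + embedding alignment.
# All transforms, encoders, and names (augment_image, augment_text,
# alignment_loss, SYNONYMS) are illustrative placeholders, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

# --- SPAugI-style image augmentation (illustrative) -------------------------
def augment_image(img: np.ndarray) -> np.ndarray:
    """Apply mild, label-preserving corruptions to an HxWxC float image in [0, 1]."""
    out = img.copy()
    if rng.random() < 0.5:                                   # horizontal flip keeps content
        out = out[:, ::-1, :]
    out = np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)     # brightness jitter
    out = np.clip(out + rng.normal(0.0, 0.02, out.shape), 0.0, 1.0)  # sensor-like noise
    return out

# --- SPAugT-style text augmentation (illustrative) --------------------------
SYNONYMS = {"photo": "picture", "large": "big", "street": "road"}  # hypothetical table

def augment_text(caption: str, drop_prob: float = 0.1) -> str:
    """Swap in synonyms and drop a few tokens while keeping the caption's meaning."""
    words = []
    for w in caption.split():
        w = SYNONYMS.get(w, w)
        if rng.random() > drop_prob or len(words) == 0:
            words.append(w)
    return " ".join(words)

# --- Toy encoders and a contrastive alignment objective ---------------------
def l2_normalize(x, axis=-1):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def encode_image(img, proj):
    return l2_normalize(img.reshape(-1) @ proj)              # flatten + linear map

def encode_text(caption, proj, vocab):
    bow = np.zeros(len(vocab))
    for w in caption.split():
        if w in vocab:
            bow[vocab[w]] += 1.0
    return l2_normalize(bow @ proj)

def alignment_loss(img_emb, txt_emb, temperature=0.07):
    """InfoNCE-style loss over a batch of matched (image, text) embeddings."""
    logits = (img_emb @ txt_emb.T) / temperature             # cosine similarities
    labels = np.arange(len(logits))
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()

# --- Usage: compute the loss on a toy batch of augmented views ---------------
if __name__ == "__main__":
    vocab = {w: i for i, w in enumerate("a photo of large dog on the street".split())}
    img_proj = rng.normal(size=(32 * 32 * 3, 16))
    txt_proj = rng.normal(size=(len(vocab), 16))

    images = [rng.random((32, 32, 3)) for _ in range(4)]
    captions = [
        "a photo of a large dog",
        "a large dog on the street",
        "a photo of the street",
        "a dog on the street",
    ]

    img_emb = np.stack([encode_image(augment_image(im), img_proj) for im in images])
    txt_emb = np.stack([encode_text(augment_text(c), txt_proj, vocab) for c in captions])
    print("alignment loss on augmented views:", alignment_loss(img_emb, txt_emb))
```

In an actual training pipeline the encoders would be deep networks and the loss would be backpropagated; the point of the sketch is only the pairing of semantic-preserving corruption with an alignment objective, which is the mechanism the abstract attributes to RVSE.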

Bibliographic Details
Main authors: Kim, Sunwoo; Shim, Kyuhong; Nguyen, Luong Trung; Shim, Byonghyo
Format: Article
Language: eng
Subjects: Computer Science - Computer Vision and Pattern Recognition
creator Kim, Sunwoo; Shim, Kyuhong; Nguyen, Luong Trung; Shim, Byonghyo
description Image-text retrieval is the task of searching for the proper textual description of the visual world and vice versa. One challenge of this task is its vulnerability to corruptions of the input image and text. Such corruptions are often unobserved during training and substantially degrade the retrieval model's decision quality. In this paper, we propose a novel image-text retrieval technique, referred to as robust visual semantic embedding (RVSE), which consists of novel image-based and text-based augmentation techniques called semantic-preserving augmentation for images (SPAugI) and for text (SPAugT). Since SPAugI and SPAugT change the original data in a way that preserves its semantic information, we enforce the feature extractors to generate semantic-aware embedding vectors regardless of the corruption, improving the model's robustness significantly. Through extensive experiments on benchmark datasets, we show that RVSE outperforms conventional retrieval schemes in terms of image-text retrieval performance.
doi 10.48550/arXiv.2303.05692
format Article
identifier DOI: 10.48550/arxiv.2303.05692
language eng
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Semantic-Preserving Augmentation for Robust Image-Text Retrieval
url https://arxiv.org/abs/2303.05692
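
The abstract reports that RVSE is evaluated by image-text retrieval performance on benchmark datasets, where robustness shows up as a smaller accuracy drop when queries are corrupted. Below is a hedged sketch of Recall@K, the standard retrieval metric, evaluated on clean versus synthetically corrupted query embeddings; the Gaussian corruption model and random embeddings are placeholders, not the paper's benchmarks or numbers.

```python
# Hedged sketch: Recall@K for image-to-text retrieval on clean vs. corrupted
# queries. Embeddings and the corruption model are synthetic placeholders.
import numpy as np

def recall_at_k(query_emb: np.ndarray, gallery_emb: np.ndarray, k: int = 5) -> float:
    """Fraction of queries whose matching gallery item (same index) appears in the top-k."""
    sims = query_emb @ gallery_emb.T                       # cosine sims for L2-normalized inputs
    topk = np.argsort(-sims, axis=1)[:, :k]
    hits = (topk == np.arange(len(query_emb))[:, None]).any(axis=1)
    return float(hits.mean())

rng = np.random.default_rng(0)
txt = rng.normal(size=(100, 16)); txt /= np.linalg.norm(txt, axis=1, keepdims=True)
img = txt + 0.1 * rng.normal(size=txt.shape)               # clean image embeddings near their captions
img /= np.linalg.norm(img, axis=1, keepdims=True)
corrupted = img + 0.5 * rng.normal(size=img.shape)         # simulate an unseen input corruption
corrupted /= np.linalg.norm(corrupted, axis=1, keepdims=True)

print("R@5 clean:    ", recall_at_k(img, txt))
print("R@5 corrupted:", recall_at_k(corrupted, txt))       # typically lower: corruption hurts an encoder not trained for it
```

The comparison mirrors the evaluation setup implied by the abstract: a robustness-oriented scheme such as RVSE would aim to keep the corrupted-query score close to the clean one.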