Beat: Bi-directional One-to-Many Embedding Alignment for Text-based Person Retrieval

Text-based person retrieval (TPR) is a challenging task that involves retrieving a specific individual based on a textual description. Despite considerable efforts to bridge the gap between vision and language, the significant differences between these modalities continue to pose a challenge. Previous methods have attempted to align text and image samples in a modal-shared space, but they face uncertainties in optimization directions due to the movable features of both modalities and the failure to account for one-to-many relationships of image-text pairs in TPR datasets. To address this issue, we propose an effective bi-directional one-to-many embedding paradigm that offers a clear optimization direction for each sample, thus mitigating the optimization problem. Additionally, this embedding scheme generates multiple features for each sample without introducing trainable parameters, making it easier to align with several positive samples. Based on this paradigm, we propose a novel Bi-directional one-to-many Embedding Alignment (Beat) model to address the TPR task. Our experimental results demonstrate that the proposed Beat model achieves state-of-the-art performance on three popular TPR datasets, including CUHK-PEDES (65.61 R@1), ICFG-PEDES (58.25 R@1), and RSTPReID (48.10 R@1). Furthermore, additional experiments on MS-COCO, CUB, and Flowers datasets further demonstrate the potential of Beat to be applied to other image-text retrieval tasks.
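The abstract describes two ideas: generating multiple features per sample without trainable parameters, and aligning image and text features in both directions so that each feature has a clear best-match target. A minimal numpy sketch of that general idea follows; it is not the authors' implementation, and the chunk-pooling scheme, the feature count `k`, and the `bidirectional_score` function are illustrative assumptions.

```python
import numpy as np

def l2norm(x, axis=-1):
    """L2-normalize along an axis so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

def one_to_many_features(token_feats, k=3):
    """Derive k embeddings per sample with no trainable parameters, here by
    mean-pooling k overlapping chunks of the token sequence (an illustrative
    stand-in for the paper's parameter-free multi-feature scheme)."""
    n = token_feats.shape[0]
    bounds = np.linspace(0, n, k + 1).astype(int)
    chunks = [token_feats[max(0, bounds[i] - 1):bounds[i + 1]].mean(axis=0)
              for i in range(k)]
    return l2norm(np.stack(chunks))           # shape (k, d)

def bidirectional_score(img_feats, txt_feats):
    """Bi-directional one-to-many matching: pair each image feature with its
    best text feature, each text feature with its best image feature, and
    average the two directions."""
    sim = img_feats @ txt_feats.T             # (k_img, k_txt) cosine matrix
    i2t = sim.max(axis=1).mean()              # image -> text direction
    t2i = sim.max(axis=0).mean()              # text -> image direction
    return 0.5 * (i2t + t2i)

# Toy usage with random stand-ins for encoder outputs.
rng = np.random.default_rng(0)
img_tokens = rng.normal(size=(49, 64))        # e.g. 7x7 patch features
txt_tokens = rng.normal(size=(20, 64))        # e.g. 20 word features
score = bidirectional_score(one_to_many_features(img_tokens),
                            one_to_many_features(txt_tokens))
```

Taking the max over the "many" side in each direction gives every feature a single concrete counterpart to move toward, which loosely mirrors the "clear optimization direction" motivation in the abstract; a training loss would contrast this score against scores for non-matching pairs.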

Detailed Description

Bibliographic Details
Published in: arXiv.org 2024-06
Main Authors: Ma, Yiwei; Sun, Xiaoshuai; Ji, Jiayi; Jiang, Guannan; Zhuang, Weilin; Ji, Rongrong
Format: Article
Language: English
Subjects: Alignment; Datasets; Embedding; Optimization; Retrieval
Online Access: Full text
identifier EISSN: 2331-8422