Identity-Seeking Self-Supervised Representation Learning for Generalizable Person Re-identification

This paper aims to learn a domain-generalizable (DG) person re-identification (ReID) representation from large-scale videos without any annotation. Prior DG ReID methods employ limited labeled data for training due to the high cost of annotation, which restricts further advances. To overcome the barriers of data and annotation, we propose to utilize large-scale unsupervised data for training. The key issue lies in how to mine identity information. To this end, we propose an Identity-seeking Self-supervised Representation learning (ISR) method. ISR constructs positive pairs from inter-frame images by modeling the instance association as a maximum-weight bipartite matching problem. A reliability-guided contrastive loss is further presented to suppress the adverse impact of noisy positive pairs, ensuring that reliable positive pairs dominate the learning process. The training cost of ISR scales approximately linearly with the data size, making it feasible to utilize large-scale data for training. The learned representation exhibits superior generalization ability. Without human annotation and fine-tuning, ISR achieves 87.0% Rank-1 on Market-1501 and 56.4% Rank-1 on MSMT17, outperforming the best supervised domain-generalizable method by 5.0% and 19.5%, respectively. In the pre-training → fine-tuning scenario, ISR achieves state-of-the-art performance, with 88.4% Rank-1 on MSMT17. The code is at https://github.com/dcp15/ISR_ICCV2023_Oral.
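As a concrete illustration of the instance-association step described above, the sketch below poses cross-frame matching as a maximum-weight bipartite matching problem and solves it with the Hungarian algorithm. This is a minimal sketch under stated assumptions: the embedding shapes, the cosine-similarity edge weights, and the function name match_instances are illustrative, not the authors' released implementation (see the linked repository for that).

# A minimal sketch of inter-frame instance association as maximum-weight
# bipartite matching, in the spirit of ISR. Feature extraction and the
# similarity measure are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_instances(feats_t: np.ndarray, feats_t1: np.ndarray):
    """Associate person crops across two frames.

    feats_t:  (N, D) L2-normalized embeddings of instances in frame t
    feats_t1: (M, D) L2-normalized embeddings of instances in frame t+1
    Returns (i, j, weight) triples for the matched cross-frame pairs.
    """
    # Edge weights: cosine similarity between every cross-frame pair.
    sim = feats_t @ feats_t1.T  # (N, M)
    # Maximum-weight bipartite matching via the Hungarian algorithm.
    rows, cols = linear_sum_assignment(sim, maximize=True)
    return [(i, j, sim[i, j]) for i, j in zip(rows, cols)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(size=(4, 128))
    b = rng.normal(size=(5, 128))
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    for i, j, w in match_instances(a, b):
        print(f"frame-t instance {i} <-> frame-t+1 instance {j} (weight {w:.3f})")

The matched pairs then serve as positives for contrastive learning; the matching weight itself is a natural candidate for a per-pair reliability signal.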

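The reliability-guided contrastive loss can likewise be sketched as an InfoNCE-style objective in which each positive pair's term is weighted by a reliability score, so that reliable pairs dominate the gradient. The exact definition of the reliability score and the weighting scheme below are assumptions for illustration; the paper's precise formulation may differ.

# A minimal sketch of a reliability-weighted contrastive (InfoNCE-style)
# loss. The weighting by a per-pair reliability score is an assumption
# intended to mirror the idea of suppressing noisy positive pairs.
import torch
import torch.nn.functional as F

def reliability_weighted_infonce(anchor, positive, reliability, temperature=0.1):
    """
    anchor, positive: (B, D) embeddings of matched pairs
    reliability:      (B,) scores in [0, 1], e.g. derived from matching weights
    """
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)

    # Similarity of each anchor to every candidate; diagonal entries are the
    # matched positives, off-diagonal entries serve as in-batch negatives.
    logits = anchor @ positive.t() / temperature  # (B, B)
    targets = torch.arange(anchor.size(0), device=anchor.device)

    # Per-pair InfoNCE term, down-weighted for unreliable (likely noisy) pairs.
    per_pair = F.cross_entropy(logits, targets, reduction="none")  # (B,)
    return (reliability * per_pair).sum() / reliability.sum().clamp_min(1e-8)

Normalizing by the total reliability mass (rather than the batch size) keeps the loss scale stable even when most pairs in a batch are unreliable.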
Bibliographic Details
Main authors: Dou, Zhaopeng; Wang, Zhongdao; Li, Yali; Wang, Shengjin
Format: Article
Language: English
Published: 2023-08-17
Subjects: Computer Science - Computer Vision and Pattern Recognition
Source: arXiv.org
DOI: 10.48550/arxiv.2308.08887
Online access: https://arxiv.org/abs/2308.08887