Learning Domain Invariant Representations for Generalizable Person Re-Identification

Generalizable person Re-Identification (ReID) aims to learn ready-to-use cross-domain representations for direct cross-data evaluation, which has recently attracted growing attention in the computer vision (CV) community. In this work, we construct a structural causal model (SCM) among identity labels...

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2023-01, Vol. 32, p. 509-523
Main Authors: Zhang, Yi-Fan; Zhang, Zhang; Li, Da; Jia, Zhen; Wang, Liang; Tan, Tieniu
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Generalizable person Re-Identification (ReID) aims to learn ready-to-use cross-domain representations for direct cross-data evaluation, which has recently attracted growing attention in the computer vision (CV) community. In this work, we construct a structural causal model (SCM) among identity labels, identity-specific factors (e.g., clothing and shoe color), and domain-specific factors (e.g., background and viewpoint). Based on this causal analysis, we propose a novel Domain Invariant Representation Learning framework for generalizable person Re-Identification (DIR-ReID). Specifically, we disentangle the identity-specific and domain-specific factors into two independent feature spaces, on top of which an effective approximate implementation of backdoor adjustment serves as a causal intervention on the SCM. Extensive experiments show that DIR-ReID outperforms state-of-the-art (SOTA) methods on large-scale domain generalization (DG) ReID benchmarks.
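The backdoor adjustment mentioned in the abstract can be sketched numerically. The sketch below is purely illustrative and is not the paper's implementation: the per-domain linear classifier heads, the uniform domain prior, and all dimensions are assumptions made up for the example. It shows the core identity P(y | do(x)) = Σ_d P(y | x, d) P(d): instead of conditioning on the single domain an image came from, the prediction is averaged over the prior distribution of domains.

```python
import math
import random

random.seed(0)

NUM_DOMAINS, NUM_IDS, FEAT_DIM = 3, 4, 8

# Hypothetical per-domain linear classifier heads W[d][i][k]
# (d: domain, i: feature index, k: identity class).
W = [[[random.gauss(0.0, 1.0) for _ in range(NUM_IDS)]
      for _ in range(FEAT_DIM)]
     for _ in range(NUM_DOMAINS)]

# Assumed uniform domain prior P(d).
P_DOMAIN = [1.0 / NUM_DOMAINS] * NUM_DOMAINS

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def p_y_given_x_d(x, d):
    """Conditional identity distribution P(y | x, d) under domain d's head."""
    logits = [sum(x[i] * W[d][i][k] for i in range(FEAT_DIM))
              for k in range(NUM_IDS)]
    return softmax(logits)

def backdoor_adjusted(x):
    """Backdoor adjustment: P(y | do(x)) = sum_d P(y | x, d) * P(d)."""
    probs = [0.0] * NUM_IDS
    for d in range(NUM_DOMAINS):
        cond = p_y_given_x_d(x, d)
        for k in range(NUM_IDS):
            probs[k] += P_DOMAIN[d] * cond[k]
    return probs

x = [random.gauss(0.0, 1.0) for _ in range(FEAT_DIM)]
p = backdoor_adjusted(x)
print(p)  # a mixture of distributions, so it sums to 1
```

Because each P(y | x, d) is a valid distribution and the prior weights sum to one, the adjusted output is itself a valid distribution over identities that no longer depends on which single domain produced the image.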
DOI: 10.1109/TIP.2022.3229621
Publisher: IEEE, United States
PMID: 37015387
CODEN: IIPRE4
Rights: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
ISSN: 1057-7149
EISSN: 1941-0042
Source: IEEE Electronic Library (IEL)
Subjects: Adaptation models
Analytical models
backdoor adjustment
Computer vision
Correlation
Data models
disentanglement
Feature extraction
Footwear
Generalizable person re-Identification
Invariants
Learning
Representation learning
Representations
Training
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-12T06%3A41%3A04IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20Domain%20Invariant%20Representations%20for%20Generalizable%20Person%20Re-Identification&rft.jtitle=IEEE%20transactions%20on%20image%20processing&rft.au=Zhang,%20Yi-Fan&rft.date=2023-01-01&rft.volume=32&rft.spage=509&rft.epage=523&rft.pages=509-523&rft.issn=1057-7149&rft.eissn=1941-0042&rft.coden=IIPRE4&rft_id=info:doi/10.1109/TIP.2022.3229621&rft_dat=%3Cproquest_RIE%3E2796161031%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2759387399&rft_id=info:pmid/37015387&rft_ieee_id=9997549&rfr_iscdi=true