Unsupervised Domain Adaptation with Background Shift Mitigating for Person Re-Identification
Unsupervised domain adaptation has been a popular approach to cross-domain person re-identification (re-ID). Two solutions build on this approach. The first builds a model for data transformation across the two domains, so that source domain data can be transferred into the target domain, where a re-ID model can then be trained on the rich, transferred source data. The second uses target domain data plus corresponding virtual labels to train a re-ID model. The constraints of both solutions are clear. The first relies heavily on the quality of the data transformation model, and the final re-ID model, trained only on source domain data, lacks knowledge of the target domain. The second in effect mixes target domain data carrying virtual labels with source domain data carrying true annotations, but such a simple mixture does not properly account for the raw information gap between the data of the two domains, a gap largely attributable to background differences between the domains.
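The abstract does not specify how the virtual labels are produced; in the UDA re-ID literature they are commonly obtained by clustering target-domain features and treating cluster indices as identity labels. The sketch below illustrates only that generic idea; every name and parameter in it is hypothetical and none of it is taken from the paper.

```python
# Hypothetical clustering-based "virtual label" round (NOT the paper's
# exact method): embed unlabeled target images, cluster the embeddings,
# then fine-tune the model on the resulting cluster labels.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

@torch.no_grad()
def extract_features(backbone, loader, device="cuda"):
    """Embed every unlabeled target image with the current model.
    Assumes the loader yields plain image tensors of shape (B, 3, H, W)."""
    backbone.eval()
    return torch.cat([backbone(x.to(device)).cpu() for x in loader])

def self_training_round(backbone, loader, num_clusters=500,
                        lr=3.5e-4, device="cuda"):
    # 1) Cluster target features; cluster indices act as virtual ID labels.
    feats = extract_features(backbone, loader, device)
    pseudo = torch.as_tensor(
        KMeans(n_clusters=num_clusters, n_init=10).fit_predict(feats.numpy()),
        dtype=torch.long)
    # 2) Fine-tune with a fresh classifier head over the virtual labels.
    #    The loader must iterate in a fixed order (no shuffling) so that
    #    batch b lines up with rows [b*B, (b+1)*B) of the feature matrix.
    head = nn.Linear(feats.size(1), num_clusters).to(device)
    opt = torch.optim.Adam(
        list(backbone.parameters()) + list(head.parameters()), lr=lr)
    criterion = nn.CrossEntropyLoss()
    backbone.train()
    seen = 0
    for x in loader:
        y = pseudo[seen:seen + x.size(0)].to(device)
        seen += x.size(0)
        loss = criterion(head(backbone(x.to(device))), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Rounds of this kind are typically alternated: a better model yields cleaner clusters, which in turn yield more reliable virtual labels.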
In this paper, a Suppression of Background Shift Generative Adversarial Network (SBSGAN) is proposed to mitigate this background-driven data gap between the two domains.
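To make the background-suppression idea concrete: the point is to soften the background rather than erase it, since some environmental cues still carry ID-relevant information. The snippet below is a crude, non-adversarial stand-in for that effect, assuming a foreground mask from some external person-parsing model; it is not the paper's learned generator.

```python
# Crude stand-in for background suppression (illustration only):
# attenuate background pixels instead of zeroing them, so that some
# ID-related environmental cues survive. SBSGAN itself learns this
# transformation adversarially; `fg_mask` here is assumed to come
# from a pretrained human-parsing model.
import torch

def suppress_background(image: torch.Tensor, fg_mask: torch.Tensor,
                        keep: float = 0.3) -> torch.Tensor:
    """image: (N, 3, H, W); fg_mask: (N, 1, H, W) with values in [0, 1]."""
    weight = fg_mask + keep * (1.0 - fg_mask)  # 1 on body, `keep` elsewhere
    return image * weight
```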
To tackle the constraints of the first solution described above, the paper further proposes a Densely Associated 2-Stream (DA-2S) network with an update strategy, designed to learn discriminative ID features from the generated data by considering both human body information and certain useful ID-related cues in the environment. The built re-ID model is then updated using target domain data with the corresponding virtual labels. Extensive evaluations on three large benchmark datasets show the effectiveness of the proposed method.
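As a rough picture of what a two-stream re-ID backbone can look like, here is a generic PyTorch skeleton with one full-image stream and one body-masked stream whose features are fused into a single embedding. It illustrates the general pattern only and is not a reconstruction of DA-2S or its dense associations.

```python
# Generic two-stream skeleton (illustrative only, not DA-2S): one stream
# sees the full image, keeping possibly useful environmental cues, and
# one sees a foreground-masked image, focusing on the human body.
import torch
import torch.nn as nn
from torchvision import models

class TwoStreamReID(nn.Module):
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.global_stream = models.resnet50(weights=None)
        self.body_stream = models.resnet50(weights=None)
        self.global_stream.fc = nn.Identity()  # expose 2048-d pooled features
        self.body_stream.fc = nn.Identity()
        self.embed = nn.Linear(2 * 2048, embed_dim)  # fuse the two streams

    def forward(self, image: torch.Tensor, fg_mask: torch.Tensor):
        g = self.global_stream(image)           # full-image features
        b = self.body_stream(image * fg_mask)   # body-focused features
        return self.embed(torch.cat([g, b], dim=1))
```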
Published in: | International Journal of Computer Vision, 2021-07, Vol. 129 (7), p. 2244-2263 |
---|---|
Main authors: | Huang, Yan; Wu, Qiang; Xu, Jingsong; Zhong, Yi; Zhang, Zhaoxiang |
Format: | Article |
Language: | English |
Subjects: | Adaptation; Annotations; Artificial Intelligence; Computer Imaging; Computer Science; Domains; Generative adversarial networks; Image Processing and Computer Vision; Labels; Pattern Recognition; Pattern Recognition and Graphics; Transformations (mathematics); Vision |
Publisher: | New York: Springer US |
DOI: | 10.1007/s11263-021-01474-8 |
ISSN: | 0920-5691 |
EISSN: | 1573-1405 |
Source: | Springer Nature - Complete Springer Journals |
Online access: | Full text |