Parameter Sharing Exploration and Hetero-Center Triplet Loss for Visible-Thermal Person Re-Identification

Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2021, Vol. 23, pp. 4414-4425
Main authors: Liu, Haijun; Tan, Xiaoheng; Zhou, Xichuan
Format: Article
Language: English
Online access: Order full text
container_end_page 4425
container_issue
container_start_page 4414
container_title IEEE transactions on multimedia
container_volume 23
creator Liu, Haijun
Tan, Xiaoheng
Zhou, Xichuan
description This paper focuses on the visible-thermal cross-modality person re-identification (VT Re-ID) task, whose goal is to match person images between the daytime visible modality and the nighttime thermal modality. A two-stream network is usually adopted to learn multi-modality person features and thereby address the cross-modality discrepancy, the most challenging problem in VT Re-ID. In this paper, we explore how many parameters a two-stream network should share, a question that is still not well investigated in the existing literature. By splitting the ResNet50 model into a modality-specific feature extraction network and a modality-shared feature embedding network, we experimentally demonstrate the effect of parameter sharing in the two-stream network for VT Re-ID. Moreover, within the framework of part-level person feature learning, we propose the hetero-center triplet loss, which relaxes the strict constraint of the traditional triplet loss by replacing comparisons of an anchor sample against all other samples with comparisons of an anchor center against all other centers. With extremely simple means, the proposed method significantly improves VT Re-ID performance. Experimental results on two datasets show that our method distinctly outperforms the state-of-the-art methods by large margins, especially on the RegDB dataset, where it achieves rank-1/mAP/mINP of 91.05%/83.28%/68.84%. With its simple but effective strategy, it can serve as a new baseline for VT Re-ID.
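To make the parameter-sharing idea concrete, here is a minimal PyTorch sketch of a two-stream network built by splitting ResNet50, as the abstract describes: the shallow stages are duplicated per modality (modality-specific feature extraction), while the deeper stages are shared (modality-shared feature embedding). This is not the authors' released code; the split point num_specific_stages is a hypothetical knob standing in for the sharing depths the paper explores.

```python
# A minimal sketch, assuming PyTorch and torchvision >= 0.13 (for the
# `weights=None` argument); NOT the authors' released implementation.
import torch.nn as nn
from torchvision.models import resnet50

def resnet50_stages():
    """Return ResNet50 as a list of five stages so it can be split anywhere."""
    r = resnet50(weights=None)
    stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
    return [stem, r.layer1, r.layer2, r.layer3, r.layer4]

class TwoStreamResNet(nn.Module):
    def __init__(self, num_specific_stages: int = 1):
        super().__init__()
        vis, th = resnet50_stages(), resnet50_stages()
        # Two independent copies of the shallow stages: modality-specific.
        self.visible_specific = nn.Sequential(*vis[:num_specific_stages])
        self.thermal_specific = nn.Sequential(*th[:num_specific_stages])
        # A single copy of the deep stages, used by both streams: shared.
        self.shared = nn.Sequential(*vis[num_specific_stages:])
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x_visible, x_thermal):
        v = self.shared(self.visible_specific(x_visible))
        t = self.shared(self.thermal_specific(x_thermal))
        return self.pool(v).flatten(1), self.pool(t).flatten(1)
```

Varying num_specific_stages from 0 (everything shared, effectively a one-stream model) to 5 (nothing shared) sweeps the design space whose effect the paper measures experimentally.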
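The hetero-center triplet loss can be sketched as follows. This is an illustrative reconstruction from the abstract, not the paper's exact formulation: the margin value, the Euclidean distance, and hardest-negative mining are assumptions. Each identity's features are averaged into one center per modality, and the triplet constraint compares centers rather than individual samples.

```python
# Illustrative reconstruction of the hetero-center triplet loss; details
# beyond the abstract (margin, L2 distance, hardest-negative mining) are
# assumptions for the sake of a runnable example.
import torch
import torch.nn.functional as F

def hetero_center_triplet_loss(feat_v, feat_t, labels, margin=0.3):
    """feat_v, feat_t: (N, D) features from the visible/thermal streams;
    labels: (N,) identity labels, with every identity present in both."""
    ids = labels.unique()
    # One center per identity and modality: the mean of its features.
    c_v = torch.stack([feat_v[labels == i].mean(dim=0) for i in ids])
    c_t = torch.stack([feat_t[labels == i].mean(dim=0) for i in ids])
    centers = torch.cat([c_v, c_t], dim=0)          # (2K, D)
    center_ids = torch.cat([ids, ids])              # identity of each center
    dist = torch.cdist(centers, centers)            # pairwise L2 distances
    K = len(ids)
    loss = centers.new_zeros(())
    for a in range(2 * K):
        # Positive: the same identity's center in the other modality.
        pos = dist[a, (a + K) % (2 * K)]
        # Hardest negative: the nearest center of any other identity.
        neg = dist[a][center_ids != center_ids[a]].min()
        loss = loss + F.relu(margin + pos - neg)
    return loss / (2 * K)
```

Because a batch contributes only two centers per identity, the anchor-to-all-samples comparisons of the conventional triplet loss collapse into far fewer center-to-center comparisons, which is the relaxation the abstract describes.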
doi_str_mv 10.1109/TMM.2020.3042080
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 1520-9210
ispartof IEEE transactions on multimedia, 2021, Vol.23, p.4414-4425
issn 1520-9210
1941-0077
language eng
recordid cdi_ieee_primary_9276429
source IEEE Electronic Library (IEL)
subjects Cameras
Cross-modality discrepancy
Datasets
Feature extraction
Generative adversarial networks
hetero-center triplet loss
Loss measurement
Machine learning
Measurement
Parameter identification
parameters sharing
Task analysis
Training data
visible-thermal person re-identification
title Parameter Sharing Exploration and Hetero-Center Triplet Loss for Visible-Thermal Person Re-Identification
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-12T04%3A35%3A30IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Parameter%20Sharing%20Exploration%20and%20Hetero-Center%20Triplet%20Loss%20for%20Visible-Thermal%20Person%20Re-Identification&rft.jtitle=IEEE%20transactions%20on%20multimedia&rft.au=Liu,%20Haijun&rft.date=2021&rft.volume=23&rft.spage=4414&rft.epage=4425&rft.pages=4414-4425&rft.issn=1520-9210&rft.eissn=1941-0077&rft.coden=ITMUF8&rft_id=info:doi/10.1109/TMM.2020.3042080&rft_dat=%3Cproquest_RIE%3E2608556205%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2608556205&rft_id=info:pmid/&rft_ieee_id=9276429&rfr_iscdi=true