Adversarial Metric Attack and Defense for Person Re-identification

Person re-identification (re-ID) has attracted much attention recently due to its great importance in video surveillance. In general, distance metrics used to identify two person images are expected to be robust under various appearance changes. However, our work observes the extreme vulnerability of existing distance metrics to adversarial examples, generated by simply adding human-imperceptible perturbations to person images. Hence, the security danger is dramatically increased when deploying commercial re-ID systems in video surveillance. Although adversarial examples have been extensively applied for classification analysis, it is rarely studied in metric analysis like person re-identification. The most likely reason is the natural gap between the training and testing of re-ID networks, that is, the predictions of a re-ID network cannot be directly used during testing without an effective metric. In this work, we bridge the gap by proposing Adversarial Metric Attack, a parallel methodology to adversarial classification attacks. Comprehensive experiments clearly reveal the adversarial effects in re-ID systems. Meanwhile, we also present an early attempt of training a metric-preserving network, thereby defending the metric against adversarial attacks. At last, by benchmarking various adversarial settings, we expect that our work can facilitate the development of adversarial attack and defense in metric-based applications.
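The core idea described above can be illustrated with a minimal sketch: instead of perturbing an image to flip a classifier's label, a metric attack perturbs a query image so that its *embedding distance* to a matching gallery image grows. The snippet below is only an assumption-laden toy, not the paper's method: it stands in for the re-ID network with a random linear embedding, and all names and sizes (`D_IN`, `D_EMB`, `metric_fgsm`, the epsilon value) are hypothetical.

```python
import numpy as np

# Toy FGSM-style "metric attack" sketch (NOT the paper's implementation):
# perturb a query image so its embedding distance to a matching gallery
# image increases, while keeping the perturbation imperceptibly small.
rng = np.random.default_rng(0)

D_IN, D_EMB = 32, 8                   # hypothetical image / embedding sizes
W = rng.normal(size=(D_EMB, D_IN))    # random linear map as a stand-in "re-ID network"

def embed(x):
    return W @ x

def distance(xq, xg):
    return float(np.linalg.norm(embed(xq) - embed(xg)))

def metric_fgsm(xq, xg, eps=0.05):
    """One FGSM-like step that increases the embedding distance."""
    diff = embed(xq) - embed(xg)
    d = np.linalg.norm(diff) + 1e-12
    grad = W.T @ (diff / d)           # d(distance)/d(xq), closed form for a linear embed
    # Bounded, sign-based perturbation, clipped back to a valid image range.
    return np.clip(xq + eps * np.sign(grad), 0.0, 1.0)

xq = rng.uniform(size=D_IN)                 # query "image" in [0, 1]
xg = xq + 0.01 * rng.normal(size=D_IN)      # near-duplicate gallery image (a true match)
x_adv = metric_fgsm(xq, xg)

# The matched pair is pushed apart by a perturbation of at most eps per pixel.
print(distance(xq, xg), "->", distance(x_adv, xg))
```

A real attack would backpropagate through a trained deep embedding network rather than using a closed-form linear gradient, and the corresponding defense trains the network so that such perturbations no longer inflate the metric.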

Detailed Description

Bibliographic Details
Main Authors: Bai, Song; Li, Yingwei; Zhou, Yuyin; Li, Qizhu; Torr, Philip H. S.
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
DOI: 10.48550/arxiv.1901.10650
Date: 2019-01-29
Source: arXiv.org
Full text: https://arxiv.org/abs/1901.10650