Knowledge Distillation for End-to-End Person Search
We introduce knowledge distillation for end-to-end person search. End-to-end methods are the current state of the art for person search: they solve detection and re-identification jointly. The largest performance drop in these jointly optimized approaches is caused by a sub-optimal detector.

We propose two distinct approaches for extra supervision of end-to-end person search methods in a teacher-student setting. The first is adopted from state-of-the-art knowledge distillation in object detection: we employ it to supervise the detector of our person search model at various levels using a specialized teacher detector. The second approach is new, simple, and yet considerably more effective: it distills knowledge from a teacher re-identification technique via a pre-computed look-up table of ID features. This relaxes the learning of identification features and lets the student focus on the detection task. The procedure not only fixes the sub-optimal detector training in the joint optimization, simultaneously improving person search, but also closes the performance gap between teacher and student, which enables model compression. Overall, we demonstrate significant improvements for two recent state-of-the-art methods using our proposed knowledge distillation approach on two benchmark datasets. Moreover, on the model compression task our approach brings the performance of smaller models on par with that of larger ones.
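The abstract describes two distillation signals but gives no equations. As a rough illustration only, here is a minimal PyTorch sketch of what the first signal, multi-level detector distillation from a specialized teacher detector, could look like. The function name, the feature/logit inputs, and the temperature value are assumptions made for this sketch, not the paper's released API.

```python
import torch.nn.functional as F

def detection_distillation_loss(student_feats, teacher_feats,
                                student_logits, teacher_logits,
                                temperature=2.0):
    # Feature imitation at multiple backbone levels: L2 distance between
    # student feature maps and the frozen teacher's maps. An adaptation
    # layer would be needed if channel counts differ (omitted here).
    feat_loss = sum(F.mse_loss(s, t.detach())
                    for s, t in zip(student_feats, teacher_feats))

    # Hinton-style soft-label distillation on the classification head:
    # KL divergence between temperature-softened class distributions.
    soft_teacher = F.softmax(teacher_logits.detach() / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    logit_loss = F.kl_div(log_student, soft_teacher,
                          reduction="batchmean") * temperature ** 2

    return feat_loss + logit_loss
```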
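Likewise, a hedged sketch of the second, look-up-table-based signal: a teacher re-identification model's features are pre-computed once into a frozen table indexed by person ID, and the student's ID embeddings are pulled toward the matching entries. All names here (`build_id_lut`, `lut_distillation_loss`, the mean-pooled, L2-normalized entries, integer person IDs) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_id_lut(teacher_reid, crops_by_id, dim=256):
    # One frozen row per labeled identity: the L2-normalized mean of the
    # teacher's embeddings over that person's ground-truth crops.
    # crops_by_id: dict mapping integer person ID -> list of crop tensors.
    lut = torch.zeros(max(crops_by_id) + 1, dim)
    for pid, crops in crops_by_id.items():
        feats = teacher_reid(torch.stack(crops))       # (n_crops, dim)
        lut[pid] = F.normalize(feats.mean(dim=0), dim=0)
    return lut  # pre-computed once; never updated during student training

def lut_distillation_loss(student_embeds, person_ids, lut):
    # Pull each student RoI embedding toward its teacher LUT entry by
    # maximizing cosine similarity (i.e., minimizing 1 - cos).
    targets = lut[person_ids]                          # (batch, dim)
    student = F.normalize(student_embeds, dim=1)
    return (1.0 - (student * targets).sum(dim=1)).mean()
```

During training, `lut_distillation_loss(embeds, pids, lut)` would simply be added to the usual detection losses; since the table needs no teacher forward pass at training time, the extra cost is a lookup, which matches the abstract's claim that the student is freed to focus on detection.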
Saved in:
Main authors: | Munjal, Bharti; Galasso, Fabio; Amin, Sikandar |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Munjal, Bharti; Galasso, Fabio; Amin, Sikandar |
description | We introduce knowledge distillation for end-to-end person search. End-to-end methods are the current state of the art for person search: they solve detection and re-identification jointly. The largest performance drop in these jointly optimized approaches is caused by a sub-optimal detector.

We propose two distinct approaches for extra supervision of end-to-end person search methods in a teacher-student setting. The first is adopted from state-of-the-art knowledge distillation in object detection: we employ it to supervise the detector of our person search model at various levels using a specialized teacher detector. The second approach is new, simple, and yet considerably more effective: it distills knowledge from a teacher re-identification technique via a pre-computed look-up table of ID features. This relaxes the learning of identification features and lets the student focus on the detection task. The procedure not only fixes the sub-optimal detector training in the joint optimization, simultaneously improving person search, but also closes the performance gap between teacher and student, which enables model compression. Overall, we demonstrate significant improvements for two recent state-of-the-art methods using our proposed knowledge distillation approach on two benchmark datasets. Moreover, on the model compression task our approach brings the performance of smaller models on par with that of larger ones. |
doi_str_mv | 10.48550/arxiv.1909.01058 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1909.01058 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_1909_01058 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition |
title | Knowledge Distillation for End-to-End Person Search |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-24T11%3A17%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Knowledge%20Distillation%20for%20End-to-End%20Person%20Search&rft.au=Munjal,%20Bharti&rft.date=2019-09-03&rft_id=info:doi/10.48550/arxiv.1909.01058&rft_dat=%3Carxiv_GOX%3E1909_01058%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |