Knowledge Distillation for 6D Pose Estimation by Aligning Distributions of Local Predictions


Bibliographic details
Main authors: Guo, Shuxuan; Hu, Yinlin; Alvarez, Jose M; Salzmann, Mathieu
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning
Online access: Order full text
creator Guo, Shuxuan; Hu, Yinlin; Alvarez, Jose M; Salzmann, Mathieu
description Knowledge distillation facilitates the training of a compact student network using a deep teacher network. While this has achieved great success in many tasks, it remains completely unstudied for image-based 6D object pose estimation. In this work, we introduce the first knowledge distillation method driven by the 6D pose estimation task. To this end, we observe that most modern 6D pose estimation frameworks output local predictions, such as sparse 2D keypoints or dense representations, and that the compact student network typically struggles to predict such local quantities precisely. Therefore, instead of imposing prediction-to-prediction supervision from the teacher to the student, we propose to distill the teacher's distribution of local predictions into the student network, facilitating its training. Our experiments on several benchmarks show that our distillation method yields state-of-the-art results with different compact student models and for both keypoint-based and dense prediction-based architectures.
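The abstract's key idea is to supervise the student with the teacher's distribution of local predictions rather than matching individual predictions one-to-one. This record does not spell out the paper's actual loss, so the following is only a minimal sketch of one plausible reading: each network's set of local 2D keypoint predictions (e.g. per-pixel votes, in normalized image coordinates) is treated as a uniform empirical distribution, and the two sets are aligned with an entropic-regularized optimal-transport (Sinkhorn) cost. All names here (sinkhorn_loss, student_votes, teacher_votes, eps, n_iters) are illustrative, not from the paper.

```python
# Hypothetical sketch only: aligns two sets of local 2D predictions, treated as
# uniform empirical distributions, via log-domain Sinkhorn iterations. The
# paper's actual distillation loss is not specified in this record.
import torch


def sinkhorn_loss(student_pts, teacher_pts, eps=0.05, n_iters=50):
    """Entropic OT cost between point sets of shape (N, 2) and (M, 2)."""
    n, m = student_pts.shape[0], teacher_pts.shape[0]
    # Pairwise squared Euclidean costs between student and teacher predictions.
    cost = torch.cdist(student_pts, teacher_pts, p=2) ** 2      # (N, M)
    # Uniform weights: every local prediction counts equally.
    mu = torch.full((n,), 1.0 / n, dtype=cost.dtype)
    nu = torch.full((m,), 1.0 / m, dtype=cost.dtype)
    log_K = -cost / eps                                          # Gibbs kernel, log domain
    f = torch.zeros_like(mu)
    g = torch.zeros_like(nu)
    # Sinkhorn iterations in log space for numerical stability.
    for _ in range(n_iters):
        f = torch.log(mu) - torch.logsumexp(log_K + g[None, :], dim=1)
        g = torch.log(nu) - torch.logsumexp(log_K + f[:, None], dim=0)
    plan = torch.exp(f[:, None] + log_K + g[None, :])            # transport plan (N, M)
    return (plan * cost).sum()


# Toy usage: 64 student votes vs. 128 teacher votes for one keypoint,
# with coordinates normalized to [0, 1].
torch.manual_seed(0)
student_votes = torch.rand(64, 2, requires_grad=True)
teacher_votes = torch.rand(128, 2)
loss = sinkhorn_loss(student_votes, teacher_votes)
loss.backward()  # gradients pull the student's votes toward the teacher's distribution
```

A set-level cost like this only asks the student to concentrate its local predictions where the teacher's do, which matches the abstract's motivation: a compact student struggles to reproduce each local quantity exactly, but can still match the overall distribution of the teacher's predictions.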
doi_str_mv 10.48550/arxiv.2205.14971
format Article
identifier DOI: 10.48550/arxiv.2205.14971
language eng
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning
title Knowledge Distillation for 6D Pose Estimation by Aligning Distributions of Local Predictions
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-02T22%3A26%3A59IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Knowledge%20Distillation%20for%206D%20Pose%20Estimation%20by%20Aligning%20Distributions%20of%20Local%20Predictions&rft.au=Guo,%20Shuxuan&rft.date=2022-05-30&rft_id=info:doi/10.48550/arxiv.2205.14971&rft_dat=%3Carxiv_GOX%3E2205_14971%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true