Quantifying the Knowledge in a DNN to Explain Knowledge Distillation for Classification

Compared to traditional learning from scratch, knowledge distillation sometimes enables a DNN to achieve superior performance. This paper provides a new perspective to explain the success of knowledge distillation, i.e., quantifying knowledge points encoded in intermediate layers of a DNN for classification, based on information theory. To this end, we model the signal processing in a DNN as layer-wise information discarding. A knowledge point is defined as an input unit whose information is discarded much less than that of other input units. On this basis, we propose three hypotheses for knowledge distillation grounded in the quantification of knowledge points. 1. The DNN learning from knowledge distillation encodes more knowledge points than the DNN learning from scratch. 2. Knowledge distillation makes the DNN more likely to learn different knowledge points simultaneously, whereas the DNN learning from scratch tends to encode various knowledge points sequentially. 3. The DNN learning from knowledge distillation is often optimized more stably than the DNN learning from scratch. To verify these hypotheses, we design three types of metrics, using annotations of foreground objects, to analyze the feature representations of the DNN: the quantity and quality of knowledge points, the learning speed of different knowledge points, and the stability of optimization directions. In experiments, we diagnosed various DNNs on different classification tasks, i.e., image classification, 3D point cloud classification, binary sentiment classification, and question answering, which verified the above hypotheses.
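To make the notion of a knowledge point concrete, below is a minimal sketch of how one might count knowledge points and score their quality, assuming a per-input-unit entropy map (how much information is discarded for each unit) has already been estimated and a foreground annotation mask is available. The function name count_knowledge_points, the margin parameter, and the exact thresholding rule are illustrative assumptions, not the authors' precise procedure.

# Hedged sketch: counting "knowledge points" from a per-unit entropy map.
# Assumes the entropy map H (information discarded per input unit) has already
# been estimated by some perturbation-based procedure; the threshold rule below
# (mean background entropy minus a margin) is an illustrative choice only.
import numpy as np

def count_knowledge_points(entropy_map: np.ndarray,
                           foreground_mask: np.ndarray,
                           margin: float = 0.2):
    """Return (quantity, quality) of knowledge points.

    entropy_map     : per-input-unit entropy (higher = more information discarded)
    foreground_mask : boolean mask of annotated foreground units
    margin          : how far below the mean background entropy a unit must lie
                      to count as a knowledge point (hypothetical parameter)
    """
    background_mean = entropy_map[~foreground_mask].mean()
    is_point = entropy_map < (background_mean - margin)

    n_foreground = int(np.sum(is_point & foreground_mask))   # task-relevant points
    n_background = int(np.sum(is_point & ~foreground_mask))  # likely task-irrelevant
    quantity = n_foreground
    quality = n_foreground / max(n_foreground + n_background, 1)
    return quantity, quality

# Toy usage: random entropy map with an artificially informative foreground.
rng = np.random.default_rng(0)
H = rng.uniform(0.5, 1.0, size=(32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
H[mask] -= 0.4  # foreground units discard less information
print(count_knowledge_points(H, mask))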

Bibliographic details
Published in: arXiv.org, 2022-08
Main authors: Zhang, Quanshi; Xu, Cheng; Chen, Yilan; Rao, Zhefan
Format: Article
Language: English
Subjects: Annotations; Classification; Distillation; Hypotheses; Image classification; Information theory; Knowledge; Learning; Optimization; Signal processing; Three dimensional models
Online access: Full text
EISSN: 2331-8422
Source: Free E-Journals