Natural Statistics of Network Activations and Implications for Knowledge Distillation

In a manner analogous to the study of natural image statistics, we study the natural statistics of deep neural network activations at various layers. As we show, these statistics, similar to image statistics, follow a power law. We also show, both analytically and empirically, that the exponent of this power law increases at a linear rate with depth. As a direct implication of our findings, we present a method for performing Knowledge Distillation (KD). While classical KD methods consider the logits of the teacher network, more recent methods obtain a leap in performance by considering the activation maps; these methods, however, use metrics that are suited to comparing images. We propose two additional loss terms that are based on the spectral properties of the intermediate activation maps. The proposed method obtains state-of-the-art results on multiple image recognition KD benchmarks.
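The abstract states that activation statistics follow a power law whose exponent grows linearly with depth, but this record does not say which statistic is fitted. The snippet below is therefore only a minimal sketch, assuming the power law is measured over the sorted singular values of a flattened activation map and that the exponent is the slope of a least-squares line in log-log space; the function name `power_law_exponent` is illustrative and not taken from the paper.

```python
# Minimal sketch (assumption): fit a power law to the sorted singular values
# of one layer's activation map via a linear fit in log-log space.
import numpy as np

def power_law_exponent(activation: np.ndarray) -> float:
    """Estimate a power-law exponent for an activation map.

    activation: array of shape (C, H, W) from one intermediate layer.
    Returns the negated slope of a log-log fit to the sorted singular values.
    """
    # One row per channel, spatial dimensions flattened.
    mat = activation.reshape(activation.shape[0], -1)
    # Singular values summarize the spectral content of the layer's output.
    sv = np.linalg.svd(mat, compute_uv=False)
    sv = sv[sv > 1e-12]                      # drop numerically zero values
    ranks = np.arange(1, len(sv) + 1)
    # Fit log(sv) ~ -alpha * log(rank) + c; alpha is the power-law exponent.
    slope, _ = np.polyfit(np.log(ranks), np.log(sv), deg=1)
    return -slope

# Example with a random stand-in for a real activation map.
alpha = power_law_exponent(np.random.randn(64, 14, 14))
print(f"estimated exponent: {alpha:.2f}")
```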

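The KD method adds two loss terms based on the spectral properties of intermediate activation maps, but their exact form is not given in this record. The sketch below illustrates one plausible reading, assuming an MSE between log-magnitude 2D Fourier spectra and an MSE between normalized singular-value spectra; `spectral_kd_losses` and the requirement that teacher and student maps share the same shape are assumptions, not the authors' formulation.

```python
# Hedged sketch (assumption): two spectral comparison losses between teacher
# and student activation maps of identical shape (N, C, H, W). In practice an
# adaptation layer may be needed when channel counts differ.
import torch

def spectral_kd_losses(student_act: torch.Tensor, teacher_act: torch.Tensor):
    # Term 1: match the log-magnitude 2D Fourier spectrum of each channel.
    s_spec = torch.log1p(torch.fft.fft2(student_act).abs())
    t_spec = torch.log1p(torch.fft.fft2(teacher_act).abs())
    loss_fft = torch.nn.functional.mse_loss(s_spec, t_spec)

    # Term 2: match normalized singular-value spectra per sample.
    s_sv = torch.linalg.svdvals(student_act.flatten(2))   # (N, C, H*W) -> (N, min(C, H*W))
    t_sv = torch.linalg.svdvals(teacher_act.flatten(2))
    s_sv = s_sv / s_sv.sum(dim=-1, keepdim=True)
    t_sv = t_sv / t_sv.sum(dim=-1, keepdim=True)
    loss_sv = torch.nn.functional.mse_loss(s_sv, t_sv)
    return loss_fft, loss_sv
```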
Bibliographic Details
Main authors: Rotman, Michael; Wolf, Lior
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
DOI: 10.48550/arxiv.2106.00368
Published: 2021-06-01
Source: arXiv.org
Online access: https://arxiv.org/abs/2106.00368