Pea-KD: Parameter-efficient and Accurate Knowledge Distillation on BERT

How can we efficiently compress a model while maintaining its performance? Knowledge Distillation (KD) is one of the most widely known methods for model compression. In essence, KD trains a smaller student model on the outputs of a larger teacher model and tries to retain as much of the teacher's performance as possible. However, existing KD methods suffer from two limitations. First, since the student model is smaller in absolute size, it inherently lacks model capacity. Second, without an initial guide, the student has difficulty imitating the teacher to its fullest. Conventional KD methods yield low performance because of these limitations. In this paper, we propose Pea-KD (Parameter-efficient and accurate Knowledge Distillation), a novel approach to KD consisting of two main parts: Shuffled Parameter Sharing (SPS) and Pretraining with Teacher's Predictions (PTP). SPS is a new parameter-sharing method that increases the student model's capacity; PTP is a KD-specialized initialization method that acts as a good initial guide for the student. Combined, they yield a significant increase in the student model's performance. Experiments on BERT across different datasets and tasks show that the proposed approach improves the student model's performance by 4.4% on average over four GLUE tasks, outperforming existing KD baselines by significant margins.
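For readers unfamiliar with the distillation objective the abstract builds on, the sketch below shows the standard soft-label KD loss (Hinton-style), which methods such as Pea-KD extend; the function name, temperature T, and mixing weight alpha are illustrative assumptions, not details taken from the paper, and the paper's SPS and PTP components are not shown here.

```python
# Minimal sketch of vanilla knowledge distillation (soft + hard targets).
# T, alpha, and kd_loss are illustrative choices, not the paper's exact setup.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Combine soft-label distillation with the ordinary hard-label loss."""
    # Soft targets: KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example with illustrative shapes: a batch of 8 examples over 3 classes.
s = torch.randn(8, 3, requires_grad=True)
t = torch.randn(8, 3)
y = torch.randint(0, 3, (8,))
print(kd_loss(s, t, y))
```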

Bibliographic Details
Main authors: Cho, Ikhyun; Kang, U
Format: Article
Language: English
Subjects: Computer Science - Learning; Statistics - Machine Learning
Online access: Full text available at https://arxiv.org/abs/2009.14822
DOI: 10.48550/arxiv.2009.14822
Published: 2020-09-30
Source: arXiv.org