eDKM: An Efficient and Accurate Train-time Weight Clustering for Large Language Models

Since Large Language Models (LLMs) have demonstrated high-quality performance on many complex language tasks, there is great interest in bringing these LLMs to mobile devices for faster responses and better privacy protection. However, the size of LLMs (i.e., billions of parameters) requires high...

Detailed Description

Saved in:
Bibliographic Details
Published in: arXiv.org 2023-09
Main authors: Cho, Minsik, Vahid, Keivan A, Fu, Qichen, Adya, Saurabh, Del Mundo, Carlo C, Rastegari, Mohammad, Naik, Devang, Zatloukal, Peter
Format: Article
Language: eng
Subjects:
Online access: Full text
container_title arXiv.org
creator Cho, Minsik
Vahid, Keivan A
Fu, Qichen
Adya, Saurabh
Del Mundo, Carlo C
Rastegari, Mohammad
Naik, Devang
Zatloukal, Peter
description Since Large Language Models (LLMs) have demonstrated high-quality performance on many complex language tasks, there is great interest in bringing these LLMs to mobile devices for faster responses and better privacy protection. However, the size of LLMs (i.e., billions of parameters) requires highly effective compression to fit into storage-limited devices. Among many compression techniques, weight clustering, a form of non-linear quantization, is one of the leading candidates for LLM compression and is supported by modern smartphones. Yet, its training overhead is prohibitively significant for LLM fine-tuning. In particular, Differentiable KMeans Clustering (DKM) has shown a state-of-the-art trade-off between compression ratio and accuracy regression, but its large memory complexity makes it nearly impossible to apply to train-time LLM compression. In this paper, we propose a memory-efficient DKM implementation, eDKM, powered by novel techniques that reduce the memory footprint of DKM by orders of magnitude. For a given tensor to be saved on the CPU for the backward pass of DKM, we compress the tensor by applying uniquification and sharding, after first checking whether an identical tensor has already been copied to the CPU. Our experimental results demonstrate that eDKM can fine-tune and compress a pretrained LLaMA 7B model from 12.6 GB to 2.5 GB (3 bit/weight) with the Alpaca dataset, reducing the train-time memory footprint of a decoder layer by 130×, while delivering good accuracy on broader LLM benchmarks (e.g., 77.7% for PIQA, 66.1% for Winogrande, and so on).
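The "uniquification" step mentioned in the abstract can be illustrated with a minimal sketch (a hypothetical illustration of the general idea, not the authors' actual implementation): a tensor whose entries are drawn from a small codebook, as happens after weight clustering, can be stored as the codebook plus narrow integer indices instead of full-precision values.

```python
import numpy as np

def uniquify(t):
    # Store only the distinct values of t plus a per-element index.
    # Flatten first so the inverse-index array is 1-D across NumPy versions.
    values, inverse = np.unique(t.ravel(), return_inverse=True)
    # Use a narrow index dtype when the codebook is small.
    idx_dtype = np.uint8 if values.size <= 256 else np.int64
    return values, inverse.astype(idx_dtype), t.shape

def reconstruct(values, indices, shape):
    # Look each index back up in the codebook and restore the shape.
    return values[indices].reshape(shape)

# Example: a 3-bit clustered weight tensor has at most 8 distinct values,
# so uint8 indices cut storage by 4x versus float32 weights.
codebook = np.linspace(-1.0, 1.0, 8, dtype=np.float32)
w = codebook[np.random.randint(0, 8, size=(512, 512))]
vals, idx, shape = uniquify(w)
assert np.array_equal(reconstruct(vals, idx, shape), w)
```

In the paper's setting, the index tensor would additionally be sharded and moved to the CPU only if no identical copy is already resident there; this sketch shows only the value/index decomposition.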
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2023-09
issn 2331-8422
language eng
recordid cdi_proquest_journals_2861509898
source Free E-Journals
subjects Accuracy
Clustering
Compression ratio
Large language models
Task complexity
Tensors
title eDKM: An Efficient and Accurate Train-time Weight Clustering for Large Language Models
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-30T03%3A36%3A38IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=eDKM:%20An%20Efficient%20and%20Accurate%20Train-time%20Weight%20Clustering%20for%20Large%20Language%20Models&rft.jtitle=arXiv.org&rft.au=Cho,%20Minsik&rft.date=2023-09-13&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2861509898%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2861509898&rft_id=info:pmid/&rfr_iscdi=true