CRVQ: Channel-relaxed Vector Quantization for Extreme Compression of LLMs
Saved in:
Main authors: | Xu, Yuzhuang; Ji, Shiyu; Zhu, Qingfu; Che, Wanxiang |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language; Computer Science - Learning |
Online access: | Order full text |
creator | Xu, Yuzhuang; Ji, Shiyu; Zhu, Qingfu; Che, Wanxiang |
description | Powerful large language models (LLMs) are increasingly expected to be deployed at lower computational cost, bringing their capabilities to resource-constrained devices. Post-training quantization (PTQ) has emerged as a leading approach to this goal, with the best methods compressing weights to less than 2 bits on average. In this paper, we propose Channel-Relaxed Vector Quantization (CRVQ), a novel technique that significantly improves the performance of PTQ baselines at the cost of only minimal additional bits. This state-of-the-art extreme compression method achieves its results through two key innovations: (1) carefully selecting and reordering a very small subset of critical weight channels, and (2) leveraging multiple codebooks to relax the constraint on those critical channels. With our method, we demonstrate a 38.9% improvement over the current strongest sub-2-bit PTQ baseline, bringing 1-bit compression closer to lossless. Furthermore, our approach offers flexible customization of quantization bit-width and performance, providing a wider range of deployment options for diverse hardware platforms. |
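The two ideas named in the abstract, picking out a few critical weight channels and giving only those channels extra codebooks, can be illustrated with a minimal NumPy sketch. This is not the paper's actual algorithm or bit allocation: the L2-norm channel-importance proxy, the plain k-means codebooks, the group and codebook sizes, and the helper names (`kmeans_codebook`, `quantize`, `crvq_sketch`) are all assumptions made for illustration.

```python
# Minimal sketch of the channel-relaxed VQ idea from the abstract.
# Assumptions (not from the paper): channel importance = column L2 norm,
# codebooks fit with plain k-means, group size 8, 16-entry codebooks.
import numpy as np

def kmeans_codebook(vectors, k, iters=20, seed=0):
    """Fit a k-entry codebook to row vectors with plain k-means."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest codeword.
        dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        # Move each codeword to the mean of its assigned vectors.
        for j in range(k):
            members = vectors[assign == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook

def quantize(vectors, codebook):
    """Replace each row vector by its nearest codeword."""
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook[dists.argmin(1)]

def crvq_sketch(W, group=8, k_base=16, k_extra=16, n_critical=4):
    """Quantize W (out x in), relaxing a few critical input channels.

    Columns with the largest L2 norm are treated as critical (an assumed
    importance proxy) and get a second, residual codebook. Sizes are
    assumed to divide evenly into length-`group` subvectors.
    """
    order = np.argsort(-np.linalg.norm(W, axis=0))   # reorder: critical first
    critical, normal = order[:n_critical], order[n_critical:]

    W_hat = np.empty_like(W)
    # Normal channels: one shared base codebook over small subvectors.
    vecs = W[:, normal].reshape(-1, group)
    cb = kmeans_codebook(vecs, k_base)
    W_hat[:, normal] = quantize(vecs, cb).reshape(W.shape[0], -1)

    # Critical channels: base codebook plus an extra residual codebook --
    # the "multiple codebooks" relaxation, at a small additional bit cost.
    vecs_c = W[:, critical].reshape(-1, group)
    q1 = quantize(vecs_c, cb)
    cb2 = kmeans_codebook(vecs_c - q1, k_extra)
    W_hat[:, critical] = (q1 + quantize(vecs_c - q1, cb2)).reshape(W.shape[0], -1)
    return W_hat

W = np.random.randn(64, 64).astype(np.float32)
W[:, :3] *= 8.0                                      # a few dominant channels
err = np.linalg.norm(W - crvq_sketch(W)) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```

The point of the sketch is the cost structure: the residual codebook is trained and stored only for the handful of critical columns, so the average bit-width barely moves while the hardest-to-quantize channels get a finer reconstruction.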
doi_str_mv | 10.48550/arxiv.2412.09282 |
format | Article |
creationdate | 2024-12-12 |
rights | http://creativecommons.org/licenses/by/4.0 (open access) |
backlink | https://arxiv.org/abs/2412.09282 |
identifier | DOI: 10.48550/arxiv.2412.09282 |
language | eng |
recordid | cdi_arxiv_primary_2412_09282 |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Learning |
title | CRVQ: Channel-relaxed Vector Quantization for Extreme Compression of LLMs |