ASVD: Activation-aware Singular Value Decomposition for Compressing Large Language Models

In this paper, we introduce a new post-training compression paradigm for Large Language Models (LLMs) to facilitate their wider adoption. We delve into LLM weight low-rank decomposition, and find that the challenges of this task stem from the distribution variance in the LLM activations and the sensitivity difference among various kinds of layers. To address these issues, we propose a training-free approach called Activation-aware Singular Value Decomposition (ASVD). Specifically, ASVD manages activation outliers by transforming the weight matrix based on the activation distribution. This transformation allows the outliers in the activation matrix to be absorbed into the transformed weight matrix, thereby enhancing decomposition accuracy. Additionally, we propose an efficient iterative calibration process to optimize layer-specific decomposition by addressing the varying sensitivity of different LLM layers. In this way, ASVD can compress a network by 10%-30%. Based on the success of the low-rank decomposition of projection matrices in the self-attention module, we further introduce ASVD to compress the KV cache. By reducing the channel dimension of KV activations, memory requirements for the KV cache can be largely reduced. ASVD can further achieve a 50% KV cache reduction without performance drop in a training-free manner. Code is anonymously available in supplementary materials.
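
The abstract describes the core mechanism only at a high level. As a rough illustration, the sketch below shows one way the activation-aware transformation could look for a single linear layer: a diagonal scale built from the mean absolute activation of each input channel is folded into the weight matrix before the SVD and folded back out afterwards. This is a minimal sketch under stated assumptions, not the authors' released implementation; the scaling rule, the exponent alpha, and the function name are assumptions, and the paper's iterative layer-wise rank calibration is omitted.

```python
import torch

def asvd_decompose(W, X_calib, rank, alpha=0.5):
    """Minimal sketch of an activation-aware SVD for one linear layer.

    W       : (out_features, in_features) weight matrix
    X_calib : (num_tokens, in_features) calibration activations
    rank    : target rank of the factorization
    alpha   : how strongly activation magnitude is folded into the weights
              (hyperparameter; the value 0.5 is an assumption)
    Returns A (out_features, rank) and B (rank, in_features) with W ~= A @ B.
    """
    # Per-input-channel activation magnitude; outlier channels get large scales.
    s = X_calib.abs().mean(dim=0).clamp(min=1e-6).pow(alpha)   # (in_features,)

    # Fold the activation scale into the weight columns so that truncating the
    # SVD penalizes errors on strongly activated (outlier) channels more.
    W_scaled = W * s                                            # scales column j by s[j]
    U, S, Vh = torch.linalg.svd(W_scaled, full_matrices=False)

    # Keep the top-`rank` components, then undo the scaling on the input side.
    A = U[:, :rank] * S[:rank]                                  # (out_features, rank)
    B = Vh[:rank, :] / s                                        # (rank, in_features)
    return A, B
```

Replacing a dense layer y = x @ W.T with y = (x @ B.T) @ A.T then stores roughly rank * (in + out) parameters instead of in * out. Applied to the key and value projections, the rank-sized intermediate x @ B.T is the quantity one would cache in place of the full-width key/value activations, which is the idea behind the KV cache reduction mentioned in the abstract.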

Bibliographic Details
Main Authors: Yuan, Zhihang; Shang, Yuzhang; Song, Yue; Wu, Qiang; Yan, Yan; Sun, Guangyu
Format: Article
Language: English
Published: 2023-12-10 (arXiv preprint)
Subjects: Computer Science - Computation and Language
DOI: 10.48550/arxiv.2312.05821
Source: arXiv.org
Online Access: https://arxiv.org/abs/2312.05821