Res-Attn: An Enhanced Res-Tuning Approach with Lightweight Attention Mechanism

Res-Tuning introduces a flexible and efficient paradigm for model tuning, showing that tuners decoupled from the backbone network can achieve performance comparable to traditional methods. Existing methods commonly construct the tuner as a set of trainable low-rank decomposition matrices, positing that a low-rank subspace suffices for adapting pre-trained foundational models to new scenarios. In this work, we present an advanced, efficient tuner augmented with low-rank attention, termed Res-Attn, which also adheres to the Res-Tuning framework. Res-Attn utilizes a parallel multi-head attention module equipped with low-rank projections for query, key, and value to execute streamlined attention operations. Through training this lightweight attention module, Res-Attn facilitates adaptation to new scenarios. Our extensive experiments across a range of discriminative and generative tasks showcase the superior performance of our method when compared to existing alternatives.
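To make the mechanism concrete, here is a minimal Python/PyTorch sketch of such a low-rank multi-head attention tuner, reconstructed from the abstract alone. The class name LowRankAttentionTuner, the default head count and rank, and the residual wiring are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankAttentionTuner(nn.Module):
    # Parallel multi-head attention whose Q/K/V projections are factored
    # through a rank-r bottleneck (dim -> rank -> dim), so only these small
    # factors are trained while the backbone stays frozen.
    def __init__(self, dim: int, num_heads: int = 4, rank: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.q_a = nn.Linear(dim, rank, bias=False)
        self.q_b = nn.Linear(rank, dim, bias=False)
        self.k_a = nn.Linear(dim, rank, bias=False)
        self.k_b = nn.Linear(rank, dim, bias=False)
        self.v_a = nn.Linear(dim, rank, bias=False)
        self.v_b = nn.Linear(rank, dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim) hidden states tapped from a frozen block.
        b, n, d = x.shape
        q = self.q_b(self.q_a(x))  # low-rank query projection
        k = self.k_b(self.k_a(x))  # low-rank key projection
        v = self.v_b(self.v_a(x))  # low-rank value projection
        # Split into heads: (batch, num_heads, seq_len, head_dim).
        q, k, v = (t.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        y = F.scaled_dot_product_attention(q, k, v)  # streamlined attention op
        y = y.transpose(1, 2).reshape(b, n, d)       # merge heads back
        return self.out(y)

In use, such a tuner would sit in parallel with a frozen transformer block and its output would be added residually, e.g. out = block(x) + tuner(x), so that gradients flow only through the small low-rank factors.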

Detailed Description

Bibliographic Details
Main Authors: Mao, Chaojie; Jiang, Zeyinzi
Format: Article
Language: eng
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Full text at https://arxiv.org/abs/2312.16916
creator Mao, Chaojie; Jiang, Zeyinzi
description Res-Tuning introduces a flexible and efficient paradigm for model tuning, showing that tuners decoupled from the backbone network can achieve performance comparable to traditional methods. Existing methods commonly construct the tuner as a set of trainable low-rank decomposition matrices, positing that a low-rank subspace suffices for adapting pre-trained foundational models to new scenarios. In this work, we present an advanced, efficient tuner augmented with low-rank attention, termed Res-Attn, which also adheres to the Res-Tuning framework. Res-Attn utilizes a parallel multi-head attention module equipped with low-rank projections for query, key, and value to execute streamlined attention operations. Through training this lightweight attention module, Res-Attn facilitates adaptation to new scenarios. Our extensive experiments across a range of discriminative and generative tasks showcase the superior performance of our method when compared to existing alternatives.
format Article
identifier DOI: 10.48550/arxiv.2312.16916
language eng
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Res-Attn: An Enhanced Res-Tuning Approach with Lightweight Attention Mechanism
url https://arxiv.org/abs/2312.16916