KernelWarehouse: Towards Parameter-Efficient Dynamic Convolution
creator | Li, Chao ; Yao, Anbang |
description | Dynamic convolution learns a linear mixture of $n$ static kernels weighted
with their sample-dependent attentions, demonstrating superior performance
compared to normal convolution. However, existing designs are
parameter-inefficient: they increase the number of convolutional parameters by
$n$ times. This and the optimization difficulty lead to no research progress in
dynamic convolution that can allow us to use a significantly large value of $n$
(e.g., $n>100$ instead of the typical setting $n<10$) to push forward the
performance boundary. In this paper, we propose $KernelWarehouse$, a more
general form of dynamic convolution, which can strike a favorable trade-off
between parameter efficiency and representation power. Its key idea is to
redefine the basic concepts of "$kernels$" and "$assembling$ $kernels$" in
dynamic convolution from the perspective of reducing kernel dimension and
increasing kernel number significantly. In principle, KernelWarehouse enhances
convolutional parameter dependencies within the same layer and across
successive layers via tactful kernel partition and warehouse sharing, yielding
a high degree of freedom to fit a desired parameter budget. We validate our
method on ImageNet and MS-COCO datasets with different ConvNet architectures,
and show that it attains state-of-the-art results. For instance, the
ResNet18|ResNet50|MobileNetV2|ConvNeXt-Tiny model trained with KernelWarehouse
on ImageNet reaches 76.05%|81.05%|75.52%|82.51% top-1 accuracy. Thanks to its
flexible design, KernelWarehouse can even reduce the model size of a ConvNet
while improving the accuracy, e.g., our ResNet18 model with 36.45%|65.10%
parameter reduction to the baseline shows 2.89%|2.29% absolute improvement in
top-1 accuracy. |
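The description defines dynamic convolution as a linear mixture of $n$ static kernels weighted by sample-dependent attentions, i.e. $W(x)=\sum_{i=1}^{n}\alpha_i(x)\,W_i$. Below is a minimal NumPy sketch of that mixing step for a 1x1 convolution; the function name, the pool-then-project attention head, and all shapes are illustrative assumptions for the generic formulation, not the paper's KernelWarehouse implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def dynamic_conv1x1(x, kernels, attn_weights):
    """Mix n static 1x1 kernels with sample-dependent attention, then convolve.

    x:            input feature map, shape (c_in, h, w)
    kernels:      n static kernels, shape (n, c_out, c_in)
    attn_weights: projection producing the attention logits, shape (n, c_in)
    """
    # Sample-dependent attention: global average pool -> linear -> softmax
    pooled = x.mean(axis=(1, 2))              # (c_in,)
    alpha = softmax(attn_weights @ pooled)    # (n,), sums to 1
    # Linear mixture of the n static kernels: a single conv's cost at inference
    w = np.tensordot(alpha, kernels, axes=1)  # (c_out, c_in)
    # A 1x1 convolution is a per-pixel matrix multiply
    c_in, h, wd = x.shape
    return (w @ x.reshape(c_in, h * wd)).reshape(-1, h, wd)
```

Note that the layer must store all $n$ kernels, so its parameter count grows roughly $n$-fold over a static convolution; this is exactly the parameter inefficiency the abstract says KernelWarehouse addresses by partitioning kernels and sharing warehouses instead.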
doi_str_mv | 10.48550/arxiv.2308.08361 |
format | Article |
creationdate | 2023-08-16 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
links | https://arxiv.org/abs/2308.08361 ; https://doi.org/10.48550/arXiv.2308.08361 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2308.08361 |
language | eng |
recordid | cdi_arxiv_primary_2308_08361 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Learning |
title | KernelWarehouse: Towards Parameter-Efficient Dynamic Convolution |
url | https://arxiv.org/abs/2308.08361 |