RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization

Feature reuse has been a key technique in lightweight convolutional neural network (CNN) architecture design. Current methods usually use a concatenation operator to keep channel numbers large at low cost (and thus preserve network capacity) by reusing feature maps from other layers. Although concatenation is parameter- and FLOPs-free, its computational cost on hardware devices is non-negligible. To address this, this paper provides a new perspective: realizing feature reuse implicitly and more efficiently, without concatenation. A novel hardware-efficient RepGhost module is proposed that performs implicit feature reuse via re-parameterization in place of the concatenation operator. Based on the RepGhost module, we develop an efficient RepGhost bottleneck and RepGhostNet. Experiments on the ImageNet and COCO benchmarks demonstrate that RepGhostNet is considerably more effective and efficient than GhostNet and MobileNetV3 on mobile devices. Specifically, our RepGhostNet surpasses GhostNet 0.5x by 2.5% Top-1 accuracy on the ImageNet dataset with fewer parameters and comparable latency on an ARM-based mobile device. Code and model weights are available at https://github.com/ChengpengChen/RepGhost.
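As a concrete illustration of the re-parameterization idea described above: parallel linear branches (for example, a convolution followed by batch normalization, plus an identity shortcut with its own batch normalization) that are summed during training can be folded into a single convolution for inference, so the feature-reuse branch adds no cost at deployment. The following PyTorch sketch shows that fusion in minimal form; it is not the authors' implementation, and the names ReparamBlock and fuse_conv_bn are hypothetical.

import torch
import torch.nn as nn

def fuse_conv_bn(conv_weight, bn):
    # Fold a BatchNorm2d into the preceding bias-free conv kernel:
    # y = gamma * (W*x - mean) / sqrt(var + eps) + beta  ==  W'*x + b'.
    std = (bn.running_var + bn.eps).sqrt()
    scale = bn.weight / std                        # per-channel gamma / std
    weight = conv_weight * scale.reshape(-1, 1, 1, 1)
    bias = bn.bias - bn.running_mean * scale
    return weight, bias

class ReparamBlock(nn.Module):
    # Training: depthwise 3x3 Conv-BN branch + identity-BN branch, summed.
    # Inference: one depthwise 3x3 conv obtained by re-parameterization.
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1,
                              groups=channels, bias=False)
        self.bn_conv = nn.BatchNorm2d(channels)
        self.bn_id = nn.BatchNorm2d(channels)      # identity branch is BN only
        self.fused = None                          # set by reparameterize()

    def forward(self, x):
        if self.fused is not None:                 # inference path: single conv
            return self.fused(x)
        return self.bn_conv(self.conv(x)) + self.bn_id(x)

    @torch.no_grad()
    def reparameterize(self):
        c = self.conv.in_channels
        w_conv, b_conv = fuse_conv_bn(self.conv.weight, self.bn_conv)
        # The identity map equals a depthwise 3x3 kernel with 1 at the center.
        w_id = torch.zeros_like(self.conv.weight)
        w_id[:, 0, 1, 1] = 1.0
        w_id, b_id = fuse_conv_bn(w_id, self.bn_id)
        # Summing parallel linear branches = summing their kernels and biases.
        self.fused = nn.Conv2d(c, c, 3, padding=1, groups=c, bias=True)
        self.fused.weight.copy_(w_conv + w_id)
        self.fused.bias.copy_(b_conv + b_id)

After training, calling block.eval() and then block.reparameterize() leaves a single depthwise convolution whose output matches the two-branch computation (batch-norm statistics are frozen in eval mode). This is the sense in which re-parameterized feature reuse avoids the runtime cost that an explicit concatenation operator would incur on hardware.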

Bibliographic Details
Authors: Chen, Chengpeng; Guo, Zichao; Zeng, Haien; Xiong, Pengfei; Dong, Jian
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Date: 2022-11-11
DOI: 10.48550/arxiv.2211.06088
Source: arXiv.org
Online access: https://arxiv.org/abs/2211.06088