RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization
Format: Article
Language: English
Abstract: Feature reuse has been a key technique in lightweight convolutional neural network (CNN) architecture design. Current methods usually employ a concatenation operator to cheaply maintain large channel numbers (and hence large network capacity) by reusing feature maps from other layers. Although concatenation is parameter- and FLOPs-free, its computational cost on hardware devices is non-negligible. To address this, this paper provides a new perspective on feature reuse: realizing it implicitly and more efficiently, without concatenation. We propose a novel, hardware-efficient RepGhost module that performs implicit feature reuse via re-parameterization rather than a concatenation operator. Based on the RepGhost module, we develop the efficient RepGhost bottleneck and RepGhostNet. Experiments on the ImageNet and COCO benchmarks demonstrate that RepGhostNet is considerably more effective and efficient than GhostNet and MobileNetV3 on mobile devices. Specifically, RepGhostNet surpasses GhostNet 0.5x by 2.5% Top-1 accuracy on ImageNet with fewer parameters and comparable latency on an ARM-based mobile device. Code and model weights are available at https://github.com/ChengpengChen/RepGhost.
DOI: 10.48550/arxiv.2211.06088