SPIN: An Empirical Evaluation on Sharing Parameters of Isotropic Networks
Format: Article
Language: English
Abstract: Recent isotropic networks, such as ConvMixer and vision transformers, have
found significant success across visual recognition tasks, matching or
outperforming non-isotropic convolutional neural networks (CNNs). Isotropic
architectures are particularly well-suited to cross-layer weight sharing, an
effective neural network compression technique. In this paper, we perform an
empirical evaluation of methods for sharing parameters in isotropic networks
(SPIN). We present a framework to formalize major weight sharing design
decisions and perform a comprehensive empirical evaluation of this design
space. Guided by our experimental results, we propose a weight sharing strategy
to generate a family of models with better overall efficiency, in terms of
FLOPs and parameters versus accuracy, compared to traditional scaling methods
alone, for example, compressing ConvMixer by 1.9x while improving accuracy on
ImageNet. Finally, we perform a qualitative study to further understand the
behavior of weight sharing in isotropic architectures. The code is available at
https://github.com/apple/ml-spin.
DOI: 10.48550/arxiv.2207.10237
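
The core idea in the abstract, cross-layer weight sharing in an isotropic architecture, can be illustrated with a minimal sketch. The following is an assumed PyTorch example of a ConvMixer-style network in which a single block is reused at every depth; the class names (`MixerBlock`, `SharedIsotropicNet`) and the all-layers sharing pattern are hypothetical illustrations, not the SPIN strategy or the code in the linked repository.

```python
# Minimal sketch (assumption): cross-layer weight sharing in a ConvMixer-style
# isotropic network. The sharing pattern here (one block reused at every depth)
# is illustrative only and does not reproduce the paper's SPIN strategy.
import torch
import torch.nn as nn


class MixerBlock(nn.Module):
    """One isotropic block: depthwise conv (spatial mixing) + pointwise conv (channel mixing)."""

    def __init__(self, dim: int, kernel_size: int = 9):
        super().__init__()
        self.depthwise = nn.Conv2d(dim, dim, kernel_size, groups=dim, padding=kernel_size // 2)
        self.pointwise = nn.Conv2d(dim, dim, kernel_size=1)
        self.norm1 = nn.BatchNorm2d(dim)
        self.norm2 = nn.BatchNorm2d(dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.norm1(self.act(self.depthwise(x)))  # residual spatial mixing
        return self.norm2(self.act(self.pointwise(x)))   # channel mixing


class SharedIsotropicNet(nn.Module):
    """Applies the *same* MixerBlock `depth` times, so its weights are shared across layers."""

    def __init__(self, dim: int = 256, depth: int = 8, patch_size: int = 7, num_classes: int = 1000):
        super().__init__()
        self.stem = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)  # patch embedding
        self.shared_block = MixerBlock(dim)  # one set of block parameters, regardless of depth
        self.depth = depth
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(dim, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.stem(x)
        for _ in range(self.depth):  # reuse the shared block at every layer
            x = self.shared_block(x)
        return self.head(x)


if __name__ == "__main__":
    model = SharedIsotropicNet()
    n_params = sum(p.numel() for p in model.parameters())
    print(f"parameters: {n_params}")                 # independent of `depth`
    print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```

In this sketch the parameter count stays constant as depth grows while FLOPs still scale with depth. Practical designs must also choose which components to share (for instance, normalization layers are often kept layer-specific) and across which subsets of layers; that space of design decisions is what the paper formalizes and evaluates empirically.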