ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: We introduce an approach to training a given compact network. To this
end, we leverage over-parameterization, which typically improves both neural
network optimization and generalization. Specifically, we propose to expand
each linear layer of the compact network into multiple consecutive linear
layers, without adding any nonlinearity. As such, the resulting expanded
network, or ExpandNet, can be contracted back to the compact one algebraically
at inference. In particular, we introduce two convolutional expansion
strategies and demonstrate their benefits on several tasks, including image
classification, object detection, and semantic segmentation. As evidenced by
our experiments, our approach outperforms both training the compact network
from scratch and performing knowledge distillation from a teacher.
Furthermore, our linear over-parameterization empirically reduces gradient
confusion during training and improves the network generalization.
DOI: 10.48550/arxiv.1811.10495
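The abstract describes expanding each linear layer into several consecutive linear layers (with no nonlinearity in between) and contracting them back algebraically at inference. The sketch below illustrates this idea for a single fully-connected layer in PyTorch; the layer sizes and code are illustrative assumptions, not the authors' implementation, and the paper's actual contributions are convolutional expansion strategies.

```python
# Minimal sketch of linear over-parameterization and algebraic contraction,
# assuming a hypothetical fully-connected layer of size 64 -> 10.
import torch
import torch.nn as nn

in_features, hidden, out_features = 64, 256, 10

# Compact layer we ultimately want to deploy.
compact = nn.Linear(in_features, out_features)

# Expanded version: two consecutive linear layers with NO nonlinearity
# in between, so their composition is still a single linear map.
expanded = nn.Sequential(
    nn.Linear(in_features, hidden),
    nn.Linear(hidden, out_features),
)

# ... train `expanded` as usual ...

# Contract back to the compact layer algebraically:
#   y = W2 (W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2)
with torch.no_grad():
    w1, b1 = expanded[0].weight, expanded[0].bias
    w2, b2 = expanded[1].weight, expanded[1].bias
    compact.weight.copy_(w2 @ w1)
    compact.bias.copy_(w2 @ b1 + b2)

# Sanity check: the compact layer reproduces the expanded network's output.
x = torch.randn(8, in_features)
assert torch.allclose(expanded(x), compact(x), atol=1e-4)
```

Because the contraction is exact, the expanded network is used only during training; inference runs on the original compact architecture with no extra cost.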