Einconv: Exploring Unexplored Tensor Network Decompositions for Convolutional Neural Networks
Main author:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Tensor decomposition methods are widely used for model compression and fast inference in convolutional neural networks (CNNs). Although many decompositions are conceivable, only CP decomposition and a few others have been applied in practice, and no extensive comparisons have been made between the available methods. Previous studies have not determined how many decompositions are available, nor which of them is optimal. In this study, we first characterize a decomposition class specific to CNNs by adopting a flexible graphical notation. The class includes well-known CNN modules such as depthwise separable convolution layers and bottleneck layers, but also previously unknown modules with nonlinear activations. We also experimentally compare the tradeoff between prediction accuracy and time/space complexity for modules found by enumerating all possible decompositions, or by using a neural architecture search. We find that some nonlinear decompositions outperform existing ones.
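The compression tradeoff described in the abstract can be illustrated with one member of the decomposition class it mentions: the depthwise separable convolution, which factorizes a standard convolution into a per-channel spatial convolution followed by a 1x1 pointwise convolution. The sketch below is illustrative only (the channel and kernel sizes are hypothetical, not taken from the paper) and counts weights, ignoring biases.

```python
# Parameter counts for a standard k x k convolution versus its
# depthwise separable factorization (one decomposition in the class
# the paper characterizes). Sizes below are chosen for illustration.

def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3
full = conv_params(c_in, c_out, k)                      # 73728 weights
separable = depthwise_separable_params(c_in, c_out, k)  # 8768 weights
print(f"compression ratio: {full / separable:.1f}x")
```

For these sizes the factorization uses roughly 8x fewer parameters, which is the kind of space-complexity saving the paper's accuracy/complexity comparison is about.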
DOI: 10.48550/arxiv.1908.04471