Faster Neural Network Training with Approximate Tensor Operations
Saved in:
Main authors: | |
---|---|
Format: | Article |
Language: | eng |
Subject headings: | |
Online access: | Order full text |
Abstract: | We propose a novel technique for faster deep neural network training which
systematically applies sample-based approximation to the constituent tensor
operations, i.e., matrix multiplications and convolutions. We introduce new
sampling techniques, study their theoretical properties, and prove that they
provide the same convergence guarantees when applied to SGD training. We apply
approximate tensor operations to single and multi-node training of MLP and CNN
networks on MNIST, CIFAR-10 and ImageNet datasets. We demonstrate up to 66%
reduction in the amount of computations and communication, and up to 1.37x
faster training time while maintaining negligible or no impact on the final
test accuracy. |
DOI: | 10.48550/arxiv.1805.08079 |