Green DetNet: Computation and Memory efficient DetNet using Smart Compression and Training
Saved in:

Main Authors: 
Format: Article
Language: English
Subjects: 
Online Access: Order full text
Abstract: This paper introduces an incremental training framework for compressing popular Deep Neural Network (DNN) based unfolded multiple-input-multiple-output (MIMO) detection algorithms like DetNet. The idea of incremental training is explored to select the optimal depth while training. To reduce the computation requirements, i.e. the number of FLoating point OPerations (FLOPs), and to enforce sparsity in the weights, the concept of structured regularization is explored using group LASSO and sparse group LASSO. Our methods lead to a $98.9\%$ reduction in memory requirement and an $81.63\%$ reduction in FLOPs compared with DetNet, without compromising BER performance.
DOI: 10.48550/arxiv.2003.09446
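The abstract mentions structured regularization via group LASSO and sparse group LASSO. As a minimal sketch of what these penalty terms look like (the function names, grouping scheme, and the `alpha` mixing weight are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def group_lasso_penalty(w, groups, lam):
    # Group LASSO: lam * sum of l2 norms over weight groups.
    # Zeroing a group's norm prunes the whole group (e.g. a layer slice).
    return lam * sum(np.linalg.norm(w[g]) for g in groups)

def sparse_group_lasso_penalty(w, groups, lam, alpha):
    # Sparse group LASSO: convex mix of an elementwise l1 term
    # (within-group sparsity) and the group-l2 term (group sparsity).
    l1 = np.abs(w).sum()
    gl2 = sum(np.linalg.norm(w[g]) for g in groups)
    return lam * (alpha * l1 + (1.0 - alpha) * gl2)

# Example: the second group is all-zero, so it contributes no penalty.
w = np.array([3.0, 4.0, 0.0, 0.0])
groups = [[0, 1], [2, 3]]
print(group_lasso_penalty(w, groups, lam=1.0))          # 5.0
print(sparse_group_lasso_penalty(w, groups, 1.0, 0.5))  # 6.0
```

In training, such a penalty would be added to the detection loss so that gradient descent drives whole weight groups toward zero, which is what enables the memory and FLOP reductions the abstract reports.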