LR-CNN: Lightweight Row-centric Convolutional Neural Network Training for Memory Reduction
Abstract: In the last decade, Convolutional Neural Networks with multi-layer architectures have advanced rapidly. However, training these complex networks is very space-consuming, since large volumes of intermediate data are preserved across layers, especially when processing high-dimensional inputs with a large batch size. This poses great challenges to the limited memory capacity of current accelerators (e.g., GPUs). Existing efforts mitigate this bottleneck either through external auxiliary solutions that incur additional hardware costs, or through internal modifications that risk an accuracy penalty. In contrast, our analysis reveals that intra- and inter-layer computations exhibit weak spatial-temporal dependency, and in some cases complete independence. This inspires us to break the traditional layer-by-layer (column) dataflow rule and re-organize operations into rows that span all convolution layers. This lightweight design allows the majority of intermediate data to be removed without any loss of accuracy. We particularly study the weak dependency between two consecutive rows. For the resulting skewed memory consumption, we provide two solutions suited to different scenarios. Evaluations on two representative networks confirm the effectiveness of our approach. We also validate that our middle dataflow optimization can be smoothly combined with existing works for further memory reduction.
DOI: 10.48550/arxiv.2401.11471
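To make the row-centric idea concrete, below is a minimal illustrative sketch, not the paper's implementation. It assumes PyTorch, a hypothetical two-layer stack of 3x3 convolutions, and a strip height chosen arbitrarily; the function names `column_forward` and `row_forward` are invented for illustration. It contrasts the conventional layer-by-layer dataflow, which materializes a full feature map per layer, with a row-wise pass that only ever holds the activations of one horizontal strip plus a small halo of rows dictated by the receptive field.

```python
# Illustrative sketch only (assumption: not the authors' LR-CNN code).
# Row-centric forward pass through a small stack of 3x3 conv layers.
import torch
import torch.nn.functional as F

def column_forward(x, weights):
    """Conventional layer-by-layer (column) dataflow: full maps per layer."""
    for w in weights:
        x = F.relu(F.conv2d(x, w, padding=1))
    return x

def row_forward(x, weights, strip_rows=16):
    """Row-centric dataflow: process horizontal strips one at a time.

    Each 3x3 conv (padding=1) widens the dependency by one row per side,
    so a stack of L layers needs a halo of L rows above and below a strip.
    Only the current strip's activations are alive at any moment.
    """
    halo = len(weights)                      # receptive-field growth per side
    _, _, H, _ = x.shape
    out_strips = []
    for top in range(0, H, strip_rows):
        bottom = min(top + strip_rows, H)
        lo, hi = max(0, top - halo), min(H, bottom + halo)
        strip = x[:, :, lo:hi, :]            # strip plus its halo rows
        for w in weights:
            strip = F.relu(F.conv2d(strip, w, padding=1))
        # Crop away the halo so only the rows owned by this strip remain.
        out_strips.append(strip[:, :, top - lo: top - lo + (bottom - top), :])
    return torch.cat(out_strips, dim=2)

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(1, 3, 64, 64)
    weights = [torch.randn(16, 3, 3, 3) * 0.1,
               torch.randn(16, 16, 3, 3) * 0.1]
    ref = column_forward(x, weights)
    row = row_forward(x, weights, strip_rows=16)
    print("max abs difference:", (ref - row).abs().max().item())
```

In this sketch the two forward passes agree, while the row-wise pass keeps at most (strip_rows + 2 * L) rows of intermediate activations instead of the full height of every layer's feature map; this is only meant to convey the dependency argument in the abstract, not the paper's actual memory-management or backward-pass design.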