LUT-NN: Empower Efficient Neural Network Inference with Centroid Learning and Table Lookup
Saved in:
Main authors: | , , , , , , , |
---|---|
Format: | Article |
Language: | English |
Keywords: | |
Online access: | Order full text |
Abstract: | MobiCom 2023: Proceedings of the 29th Annual International
Conference on Mobile Computing and Networking. On-device Deep Neural Network
(DNN) inference consumes significant computing resources and development
effort. To alleviate this, we propose LUT-NN, the first system to empower
inference by table lookup, to reduce inference cost. LUT-NN learns the typical
features of each operator, called centroids, and precomputes the results for
these centroids to save in lookup tables. During inference, the results for
the centroids closest to the inputs can be read directly from the table as the
approximate outputs, without computation. LUT-NN integrates two major novel
techniques: (1) differentiable centroid learning through backpropagation,
which adapts three levels of approximation to minimize the accuracy impact of
centroids; (2) table lookup inference execution, which comprehensively
considers different levels of parallelism, memory access reduction, and
dedicated hardware units for optimal performance. LUT-NN is evaluated on
multiple real tasks, covering image recognition, speech recognition, and
natural language processing. Compared to related work, LUT-NN improves
accuracy by 66% to 92%, achieving a level similar to that of the original
models. LUT-NN reduces cost in all dimensions, including FLOPs ($\leq$ 16x),
model size ($\leq$ 7x), latency ($\leq$ 6.8x), memory ($\leq$ 6.5x), and
power ($\leq$ 41.7%). |
---|---|
DOI: | 10.48550/arxiv.2302.03213 |
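
The abstract above describes inference by nearest-centroid table lookup: sub-vectors of the input are matched to learned centroids, and precomputed centroid results are read from tables instead of running multiply-accumulate operations. Below is a minimal NumPy sketch of that idea; the function names, shapes, and the randomly initialized stand-in centroids are illustrative assumptions, not the authors' implementation, which learns centroids differentiably by backpropagation and optimizes the lookup execution for specific hardware.

```python
# Sketch of table-lookup inference for one dense layer, in the spirit of LUT-NN.
# All names and shapes are illustrative assumptions, not the paper's API.
import numpy as np


def build_tables(weight, centroids):
    """Precompute dot products between every centroid and every output column.

    weight:    (d_in, d_out) dense-layer weights
    centroids: (n_codebooks, n_centroids, sub_dim), with n_codebooks * sub_dim == d_in
    returns:   (n_codebooks, n_centroids, d_out) lookup tables
    """
    n_codebooks, n_centroids, sub_dim = centroids.shape
    w = weight.reshape(n_codebooks, sub_dim, -1)       # split weight rows per codebook
    return np.einsum('kcs,kso->kco', centroids, w)     # precomputed partial outputs


def lut_forward(x, centroids, tables):
    """Approximate x @ weight by nearest-centroid lookup instead of multiply-adds."""
    n_codebooks, n_centroids, sub_dim = centroids.shape
    xs = x.reshape(n_codebooks, sub_dim)               # split input into sub-vectors
    # index of the closest centroid per codebook (Euclidean distance)
    dists = ((xs[:, None, :] - centroids) ** 2).sum(-1)
    idx = dists.argmin(axis=1)
    # read the precomputed partial outputs and accumulate -- no matmul at inference time
    return tables[np.arange(n_codebooks), idx].sum(axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_in, d_out, n_codebooks, n_centroids = 64, 32, 8, 16
    sub_dim = d_in // n_codebooks
    weight = rng.standard_normal((d_in, d_out))
    # stand-in for centroids that the paper learns through backpropagation
    centroids = rng.standard_normal((n_codebooks, n_centroids, sub_dim))
    tables = build_tables(weight, centroids)
    x = rng.standard_normal(d_in)
    y_approx = lut_forward(x, centroids, tables)
    print(y_approx.shape)  # (32,)
```

In this toy setup, per-layer cost shifts from a full d_in x d_out matrix multiplication to a handful of distance computations plus table reads, which is where the FLOP, latency, and memory reductions cited in the abstract come from; accuracy then hinges on how well the centroids are learned.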