Efficient LLM Inference using Dynamic Input Pruning and Cache-Aware Masking
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: While mobile devices provide ever more compute power, improvements in DRAM
bandwidth are much slower. This is unfortunate for large language model (LLM)
token generation, which is heavily memory-bound. Previous work has proposed to
leverage natural dynamic activation sparsity in ReLU-activated LLMs to reduce
effective DRAM bandwidth per token. However, more recent LLMs use SwiGLU
instead of ReLU, which results in little inherent sparsity. While SwiGLU
activations can be pruned based on magnitude, the resulting sparsity patterns
are difficult to predict, rendering previous approaches ineffective. To
circumvent this issue, our work introduces Dynamic Input Pruning (DIP): a
predictor-free dynamic sparsification approach, which preserves accuracy with
minimal fine-tuning. DIP can further use lightweight LoRA adapters to regain
some performance lost during sparsification. Lastly, we describe a novel
cache-aware masking strategy, which considers the cache state and activation
magnitude to further increase cache hit rate, improving LLM token rate on
mobile devices. DIP outperforms other methods in terms of accuracy, memory and
throughput trade-offs across simulated hardware settings. On Phi-3-Medium, DIP
achieves a 46% reduction in memory and 40% increase in throughput with $...
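The DIP idea described in the abstract can be illustrated with a minimal PyTorch-style sketch of a SwiGLU feed-forward block with magnitude-based, predictor-free pruning. The top-k selection on the gate branch and the `keep_ratio` parameter are illustrative assumptions, not the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def swiglu_ffn_dip(x, w_gate, w_up, w_down, keep_ratio=0.25):
    """Illustrative SwiGLU FFN with magnitude-based dynamic input pruning.

    x:      (batch, d_model) activations for the current token
    w_gate: (d_ff, d_model), w_up: (d_ff, d_model), w_down: (d_model, d_ff)
    keep_ratio: hypothetical fraction of hidden units kept per token
    """
    # The gating branch is computed first; its magnitude decides which
    # of the d_ff hidden units are worth evaluating at all.
    gate = F.silu(x @ w_gate.T)                          # (batch, d_ff)

    # Predictor-free selection: keep the top-k units per token by |gate|.
    k = max(1, int(keep_ratio * gate.shape[-1]))
    _, idx = gate.abs().topk(k, dim=-1)                  # (batch, k)
    mask = torch.zeros_like(gate).scatter_(-1, idx, 1.0)

    # Pruned units contribute nothing, so their w_up rows and w_down
    # columns never need to be streamed from DRAM or flash.
    hidden = (gate * mask) * (x @ w_up.T)
    return hidden @ w_down.T                             # (batch, d_model)
```

In an actual memory-bound kernel one would gather only the selected weight rows rather than applying a dense mask; the dense form above just makes the selection rule explicit.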
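The cache-aware masking strategy can be sketched in the same spirit: when choosing which hidden units to keep, bias the score toward units whose weights already reside in the limited on-device weight cache, so that fewer rows must be fetched. The multiplicative `cache_bonus` scoring rule below is an assumption for illustration, not the paper's exact policy:

```python
import torch

def cache_aware_select(gate, cached_ids, k, cache_bonus=1.5):
    """Pick k hidden units per token, trading off activation magnitude
    against whether a unit's weights are already cached.

    gate:        (d_ff,) gate activations for the current token
    cached_ids:  LongTensor of hidden-unit indices resident in the cache
    cache_bonus: hypothetical boost applied to cached units
    """
    score = gate.abs().clone()
    # Between two similarly salient units, prefer the one that is
    # already cached (cache hit, no DRAM/flash transfer); strongly
    # activated uncached units still win on magnitude alone.
    score[cached_ids] *= cache_bonus
    _, keep_ids = score.topk(k)

    # Units that were selected but not cached must be fetched; a real
    # system would then insert them into the cache (e.g. LRU eviction).
    fetched = keep_ids[~torch.isin(keep_ids, cached_ids)]
    return keep_ids, fetched
```

Repeating this selection per generated token keeps the hit rate high without any learned sparsity predictor.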
DOI: 10.48550/arxiv.2412.01380