GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding
Saved in:
Main authors: | , , , , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Speculative decoding is a relatively new decoding framework that leverages small and efficient draft models to reduce the latency of LLMs. In this study, we introduce GliDe and CaPE, two low-hassle modifications to vanilla speculative decoding that further improve the decoding speed of a frozen LLM. Specifically, GliDe is a modified draft model architecture that reuses the cached keys and values from the target LLM, while CaPE is a proposal expansion method that uses the draft model's confidence scores to select additional candidate tokens for verification. Extensive experiments on different benchmarks demonstrate that the proposed GliDe draft model significantly reduces expected decoding latency. Wall-time evaluation shows that GliDe accelerates Vicuna models by up to 2.17x, and adding CaPE extends the speedup to 2.61x. We will release our code, data, and the trained draft models. |
---|---|
DOI: | 10.48550/arxiv.2402.02082 |
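The abstract's description of CaPE, a proposal expansion that uses the draft model's confidence to add candidate tokens for verification, can be illustrated with a small sketch. This is a hypothetical reconstruction, not the authors' released code: the function name `expand_proposals`, the `conf_threshold`, and the fixed `extra_k` heuristic are all assumptions for illustration only.

```python
# Hypothetical sketch of a CaPE-style proposal expansion (not the paper's
# actual algorithm): at each drafted position, if the draft model's top-1
# confidence is low, add extra top-k candidate tokens for the target LLM
# to verify in the same pass.

def expand_proposals(draft_probs, base_k=1, extra_k=3, conf_threshold=0.7):
    """draft_probs: one dict per position, mapping token -> probability.

    Returns, per position, the list of candidate tokens to send to the
    target LLM for verification.
    """
    proposals = []
    for probs in draft_probs:
        # Rank candidates by draft-model probability, highest first.
        ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
        _, top_p = ranked[0]
        # Low confidence -> propose more candidates; high confidence -> just one.
        k = base_k + (extra_k if top_p < conf_threshold else 0)
        proposals.append([tok for tok, _ in ranked[:k]])
    return proposals

# A confident position keeps one candidate; an uncertain one keeps several.
probs = [
    {"the": 0.90, "a": 0.05, "an": 0.05},   # confident
    {"cat": 0.40, "dog": 0.35, "fox": 0.25}, # uncertain
]
print(expand_proposals(probs))  # [['the'], ['cat', 'dog', 'fox']]
```

Expanding only low-confidence positions keeps the verification batch small where the draft model is likely correct, which is the intuition the abstract gives for why CaPE raises the speedup from 2.17x to 2.61x.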