A Decoding Acceleration Framework for Industrial Deployable LLM-based Recommender Systems
Saved in:

| Field | Value |
|---|---|
| Main Authors | , , , , , , , , |
| Format | Article |
| Language | English |
| Subjects | |
| Online Access | Order full text |
Summary:

Recently, increasing attention has been paid to LLM-based recommender systems, but their deployment is still under exploration in the industry. Most deployments utilize LLMs as feature enhancers, generating augmentation knowledge in the offline stage. However, in recommendation scenarios involving numerous users and items, even offline generation with LLMs consumes considerable time and resources. This generation inefficiency stems from the autoregressive nature of LLMs, and a promising direction for acceleration is speculative decoding, a Draft-then-Verify paradigm that increases the number of generated tokens per decoding step. In this paper, we first identify that recommendation knowledge generation is suitable for retrieval-based speculative decoding. We then discern two characteristics: (1) the extensive items and users in RSs make retrieval inefficient, and (2) RSs exhibit high tolerance for diversity in the text generated by LLMs. Based on these insights, we propose a Decoding Acceleration Framework for LLM-based Recommendation (dubbed DARE), with a Customized Retrieval Pool to improve retrieval efficiency and Relaxed Verification to increase the acceptance rate of draft tokens. Extensive experiments demonstrate that DARE achieves a 3-5x speedup and is compatible with various frameworks and backbone LLMs. DARE has also been deployed to online advertising scenarios within a large-scale commercial environment, achieving a 3.45x speedup while maintaining downstream performance.
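To make the Draft-then-Verify paradigm in the abstract concrete, here is a minimal, self-contained Python sketch of retrieval-based speculative decoding with relaxed (top-k) verification. The corpus, n-gram pool, and `toy_topk` "target model" are hypothetical stand-ins for illustration, not DARE's actual components.

```python
# Sketch of retrieval-based speculative decoding with relaxed verification.
# The corpus and toy_topk target model below are hypothetical stand-ins.

def build_retrieval_pool(tokens, ngram=2, draft_len=3):
    """Map each n-gram prefix to a previously seen continuation."""
    pool = {}
    for i in range(len(tokens) - ngram - draft_len + 1):
        key = tuple(tokens[i:i + ngram])
        pool.setdefault(key, tokens[i + ngram:i + ngram + draft_len])
    return pool

def speculative_decode(prefix, pool, topk_fn, steps=8, ngram=2, k=3):
    """Draft tokens by retrieval, then accept each draft token if it is
    among the target model's top-k candidates (relaxed verification)."""
    out = list(prefix)
    for _ in range(steps):
        draft = pool.get(tuple(out[-ngram:]), [])
        accepted = 0
        for tok in draft:
            if tok in topk_fn(out, k):   # relaxed: top-k, not argmax only
                out.append(tok)
                accepted += 1
            else:
                break                    # reject the rest of the draft
        if accepted == 0:                # fall back to one model token
            out.append(topk_fn(out, 1)[0])
    return out

# Toy setup: the "target model" is a bigram lookup over the same corpus.
corpus = "the user likes action movies and the user likes comedy shows".split()
pool = build_retrieval_pool(corpus)

def toy_topk(context, k):
    cands = [corpus[i + 1] for i in range(len(corpus) - 1)
             if corpus[i] == context[-1]]
    return cands[:k] or ["<eos>"]

result = speculative_decode(["the", "user"], pool, toy_topk, steps=3)
# Three decoding steps append nine accepted tokens instead of three.
```

With strict (argmax-only) verification, a draft token is rejected whenever it differs from the single most likely token; relaxing acceptance to the top-k candidates raises the acceptance rate, which the abstract argues is tolerable because RSs accept high diversity in generated text.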
DOI: 10.48550/arxiv.2408.05676