SLMRec: Distilling Large Language Models into Small for Sequential Recommendation
Main authors: | |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | The Sequential Recommendation (SR) task involves predicting the next item a user
is likely to interact with, given their past interactions. The SR models
examine the sequence of a user's actions to discern more complex behavioral
patterns and temporal dynamics. Recent research demonstrates the great impact
of LLMs on sequential recommendation systems, either viewing sequential
recommendation as language modeling or serving as the backbone for user
representation. Although these methods deliver outstanding performance, there
is scant evidence of the necessity of a large language model, or of how large a
language model is actually needed, especially in the sequential recommendation
setting. Meanwhile, owing to the huge size of LLMs, it is inefficient and
impractical to apply an LLM-based model on real-world platforms that often need to process
billions of traffic logs daily. In this paper, we explore the influence of
LLMs' depth by conducting extensive experiments on large-scale industry
datasets. Surprisingly, our motivational experiments reveal that most
intermediate layers of LLMs are redundant, indicating that pruning the
remaining layers can still maintain strong performance. Motivated by this
insight, we empower small language models for SR, namely SLMRec, which adopt a
simple yet effective knowledge distillation method. Moreover, SLMRec is
orthogonal to other post-training efficiency techniques, such as quantization
and pruning, so that they can be leveraged in combination. Comprehensive
experimental results illustrate that the proposed SLMRec model attains the best
performance using only 13% of the parameters found in LLM-based recommendation
models while simultaneously achieving up to 6.6x and 8.0x speedups in training
and inference time costs, respectively. Besides, we provide a theoretical
justification for why small language models can perform comparably to large
language models in SR. |
---|---|
DOI: | 10.48550/arxiv.2405.17890 |
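The abstract's central idea, distilling a deep LLM-based recommender into a shallow student by aligning the student's hidden states with teacher states at matched depths, can be sketched as follows. This is a minimal toy illustration with random vectors, not the paper's implementation; all sizes, names, and the layer alignment are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the teacher has 8 transformer blocks, the student
# keeps only 2, and each student layer is aligned with one teacher layer
# (here, teacher layers 3 and 7). Hidden states are toy random tensors
# of shape (sequence_length, hidden_size).
seq_len, hidden = 5, 16
teacher_hidden = [rng.normal(size=(seq_len, hidden)) for _ in range(8)]

# Simulate a partially trained student whose states are close to the
# aligned teacher states, up to small noise.
student_hidden = [
    teacher_hidden[3] + 0.1 * rng.normal(size=(seq_len, hidden)),
    teacher_hidden[7] + 0.1 * rng.normal(size=(seq_len, hidden)),
]

def distill_loss(student_states, teacher_states, align):
    """Mean-squared error between each student layer and the teacher
    layer it is aligned to -- a common feature-distillation objective."""
    return float(np.mean([
        np.mean((s - teacher_states[t]) ** 2)
        for s, t in zip(student_states, align)
    ]))

loss = distill_loss(student_hidden, teacher_hidden, align=[3, 7])
print(f"distillation loss: {loss:.4f}")
```

Minimizing such a loss alongside the usual next-item prediction objective is one standard way a small student can inherit the behavior of a much larger teacher, which is consistent with the abstract's finding that many intermediate LLM layers are redundant for SR.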