Decoding Matters: Addressing Amplification Bias and Homogeneity Issue for LLM-based Recommendation
Format: Article
Language: English
Abstract: Adapting Large Language Models (LLMs) for recommendation requires careful consideration of the decoding process, given the inherent differences between generating items and natural language. Existing approaches often directly apply LLMs' original decoding methods. However, we find these methods encounter significant challenges: 1) amplification bias, where standard length normalization inflates scores for items containing tokens with generation probabilities close to 1 (termed ghost tokens), and 2) homogeneity issue, where multiple similar or repetitive items are generated for a user. To tackle these challenges, we introduce a new decoding approach named Debiasing-Diversifying Decoding (D3). D3 disables length normalization for ghost tokens to alleviate amplification bias, and it incorporates a text-free assistant model to encourage tokens less frequently generated by LLMs, counteracting recommendation homogeneity. Extensive experiments on real-world datasets demonstrate the method's effectiveness in enhancing accuracy and diversity. The code is available at https://github.com/SAI990323/DecodingMatters.
DOI: 10.48550/arxiv.2406.14900
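Based only on the abstract above, the following is a minimal, hypothetical Python sketch of how a D3-style score for one candidate item might combine the two ideas: excluding ghost tokens from length normalization and mixing in a text-free assistant model. The ghost-token threshold, the weight alpha, and the way the assistant scores are combined are illustrative assumptions and may differ from the authors' actual implementation in the linked repository.

```python
import math

def d3_style_score(llm_logprobs, assistant_logprobs, ghost_threshold=0.99, alpha=0.1):
    """Hypothetical D3-style sequence score for one candidate item.

    llm_logprobs: per-token log-probabilities of the item under the LLM.
    assistant_logprobs: per-token log-probabilities under a text-free assistant model.
    ghost_threshold, alpha: illustrative hyperparameters, not taken from the paper.
    """
    log_threshold = math.log(ghost_threshold)
    # Ghost tokens: tokens the LLM generates with probability close to 1.
    is_ghost = [lp >= log_threshold for lp in llm_logprobs]
    # Amplification bias: standard length normalization divides by the full length,
    # so items padded with near-certain ghost tokens get inflated scores.
    # Debiasing: exclude ghost tokens from the normalizing length.
    effective_len = max(sum(1 for g in is_ghost if not g), 1)
    debiased_llm = sum(llm_logprobs) / effective_len
    # Diversifying: mix in the assistant model to favor tokens the LLM
    # generates less frequently, counteracting homogeneous recommendations.
    assistant = sum(assistant_logprobs) / max(len(assistant_logprobs), 1)
    return debiased_llm + alpha * assistant

# Example: a 4-token item where two tokens are near-certain ghost tokens.
item_lp = [-0.001, -1.2, -0.0005, -0.9]
assistant_lp = [-2.0, -1.5, -2.2, -1.0]
print(d3_style_score(item_lp, assistant_lp))
```

With standard length normalization this item would be divided by 4 tokens despite two of them being ghost tokens; the sketch divides by the 2 informative tokens instead, removing the inflation, and then adds a small assistant-model term to reward items the LLM itself scores less often.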