How Can Recommender Systems Benefit from Large Language Models: A Survey
Format: Article
Language: English
Abstract: With the rapid development of online services, recommender systems (RS) have become increasingly indispensable for mitigating information overload. Despite remarkable progress, conventional recommendation models (CRM) still have limitations, e.g., a lack of open-world knowledge and difficulty in comprehending users' underlying preferences and motivations. Meanwhile, large language models (LLM) have shown impressive general intelligence and human-like capabilities, which mainly stem from their extensive open-world knowledge, reasoning ability, and comprehension of human culture and society. Consequently, the emergence of LLM is inspiring the design of recommender systems and pointing to a promising research direction: whether we can incorporate LLM and benefit from their knowledge and capabilities to compensate for the limitations of CRM. In this paper, we conduct a comprehensive survey of this research direction from the perspective of the whole pipeline in real-world recommender systems. Specifically, we summarize existing works along two orthogonal aspects: where and how to adapt LLM to RS. For the WHERE question, we discuss the roles that LLM could play in different stages of the recommendation pipeline, i.e., feature engineering, feature encoder, scoring/ranking function, user interaction, and pipeline controller. For the HOW question, we investigate the training and inference strategies, resulting in two fine-grained taxonomy criteria, i.e., whether to tune LLM or not, and whether to involve conventional recommendation models for inference. We then highlight key challenges in adapting LLM to RS from three aspects, i.e., efficiency, effectiveness, and ethics. Finally, we summarize the survey and discuss future prospects. We actively maintain a GitHub repository for papers and other related resources: https://github.com/CHIANGEL/Awesome-LLM-for-RecSys/.
DOI: 10.48550/arxiv.2306.05817