RLPF: Reinforcement Learning from Prediction Feedback for User Summarization with LLMs
Format: Article
Language: English
Abstract: LLM-powered personalization agent systems employ Large Language Models (LLMs)
to predict users' behavior from their past activities. However, their
effectiveness is often limited by the inherent noise and length of
extensive user histories.
Existing pretrained LLMs may generate summaries that are concise but lack the
necessary context for downstream tasks, hindering their utility in
personalization systems. To address these challenges, we introduce
Reinforcement Learning from Prediction Feedback (RLPF). RLPF fine-tunes LLMs to
generate concise, human-readable user summaries that are optimized for
downstream task performance. By maximizing the usefulness of the generated
summaries, RLPF effectively distills extensive user history data while
preserving essential information for downstream tasks. Our empirical evaluation
demonstrates significant improvements in both extrinsic downstream task utility
and intrinsic summary quality, surpassing baseline methods by up to 22% on
downstream task performance and achieving an up to 84.59% win rate on
Factuality, Abstractiveness, and Readability. RLPF also achieves a remarkable
74% reduction in context length while improving performance on 16 out of 19
unseen tasks and/or datasets, showcasing its generalizability. This approach
offers a promising solution for enhancing LLM personalization by effectively
transforming long, noisy user histories into informative and human-readable
representations.
DOI: 10.48550/arxiv.2409.04421
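The core idea in the abstract, rewarding a summary by how well a downstream model predicts the user's behavior from it, can be illustrated with a minimal sketch. This is not the paper's implementation; the function names (`predict_next_activity`, `prediction_reward`) and the keyword-matching stand-in for the frozen prediction model are hypothetical, chosen only to make the reward signal concrete.

```python
# Hedged sketch of an RLPF-style reward: the summary earns reward when a
# (here, toy) downstream model conditioned on it predicts the user's actual
# next activity. All names below are illustrative, not from the paper.

def predict_next_activity(summary: str, candidates: list[str]) -> str:
    """Stand-in for a frozen downstream prediction model: returns the
    first candidate mentioned in the summary, else the first candidate."""
    for c in candidates:
        if c.lower() in summary.lower():
            return c
    return candidates[0]

def prediction_reward(summary: str, candidates: list[str], true_next: str) -> float:
    """Scalar prediction-feedback reward: 1.0 if the prediction made from
    the summary matches the user's actual next activity, else 0.0."""
    return 1.0 if predict_next_activity(summary, candidates) == true_next else 0.0

# A concise summary that preserves task-relevant information is rewarded;
# in the full method this scalar would drive RL fine-tuning of the summarizer.
summary = "User mostly watches sci-fi movies and recently rated Dune highly."
candidates = ["comedy special", "sci-fi movies", "cooking show"]
reward = prediction_reward(summary, candidates, true_next="sci-fi movies")
```

In the actual method this reward would be fed to an RL algorithm updating the summarizer LLM, so the summary is optimized for downstream utility rather than for resemblance to the raw history.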