Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
DOI: 10.48550/arxiv.2307.15217