When Your AIs Deceive You: Challenges of Partial Observability in Reinforcement Learning from Human Feedback
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Past analyses of reinforcement learning from human feedback (RLHF) assume that the human evaluators fully observe the environment. What happens when human feedback is based only on partial observations? We formally define two failure cases: deceptive inflation and overjustification. Modeling the human as Boltzmann-rational with respect to a belief over trajectories, we prove conditions under which RLHF is guaranteed to result in policies that deceptively inflate their performance, overjustify their behavior to make an impression, or both. Under the new assumption that the human's partial observability is known and accounted for, we then analyze how much information the feedback process provides about the return function. We show that sometimes the human's feedback determines the return function uniquely up to an additive constant, but in other realistic cases there is irreducible ambiguity. We propose exploratory research directions to help tackle these challenges, experimentally validate both the theoretical concerns and potential mitigations, and caution against blindly applying RLHF in partially observable settings.
DOI: 10.48550/arxiv.2402.17747
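As an illustrative aside on the abstract above: a Boltzmann-rational human comparing two trajectories under partial observability can be sketched as below. The notation is assumed for illustration and is not taken from the record: trajectories $\xi_1, \xi_2$, the observation sequences $o_1, o_2$ the human actually sees, a return function $G$, a human belief $B(\cdot \mid o)$ over full trajectories given observations, and a rationality coefficient $\beta$.

```latex
% Minimal sketch of a Boltzmann-rational preference model under partial observability.
% All symbols (\xi, o, G, B, \beta) are illustrative assumptions, not taken from the record.
\[
  P(\xi_1 \succ \xi_2 \mid o_1, o_2)
  = \frac{\exp\!\bigl(\beta\, \mathbb{E}_{\xi \sim B(\cdot \mid o_1)}[G(\xi)]\bigr)}
         {\exp\!\bigl(\beta\, \mathbb{E}_{\xi \sim B(\cdot \mid o_1)}[G(\xi)]\bigr)
          + \exp\!\bigl(\beta\, \mathbb{E}_{\xi \sim B(\cdot \mid o_2)}[G(\xi)]\bigr)}
\]
```

Under full observability the expectation collapses to $G(\xi_i)$ and this reduces to the familiar Boltzmann (Bradley-Terry-style) preference model commonly used in RLHF; under partial observability, a policy can raise the human's estimated return without raising the true return, which is the deceptive-inflation failure mode the abstract describes.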