Preferences Implicit in the State of the World
Format: Article
Language: English
Online access: Order full text
Summary: Reinforcement learning (RL) agents optimize only the features specified in a reward function and are indifferent to anything left out inadvertently. This means that we must not only specify what to do, but also the much larger space of what not to do. It is easy to forget these preferences, since these preferences are already satisfied in our environment. This motivates our key insight: when a robot is deployed in an environment that humans act in, the state of the environment is already optimized for what humans want. We can therefore use this implicit preference information from the state to fill in the blanks. We develop an algorithm based on Maximum Causal Entropy IRL and use it to evaluate the idea in a suite of proof-of-concept environments designed to show its properties. We find that information from the initial state can be used to infer both side effects that should be avoided and preferences for how the environment should be organized. Our code can be found at https://github.com/HumanCompatibleAI/rlsp.
DOI: 10.48550/arxiv.1902.04198
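
The summary names the approach only at a high level: an algorithm based on Maximum Causal Entropy IRL that reads preference information out of the state the robot observes at deployment. The sketch below is a minimal illustration of that idea in a toy tabular MDP; it is not the authors' RLSP implementation from the linked repository. The three-state gridworld, the uniform prior over the human's starting state, the horizon of 5, and the finite-difference gradient are all assumptions made for this example. It infers reward weights theta by maximizing the likelihood that a soft-optimal (MaxCausalEnt) human policy, acting for a few steps before deployment, would leave the environment in the observed state.

```python
import numpy as np

def soft_value_iteration(T, r, horizon):
    """Maximum Causal Entropy soft Q-iteration.

    T: transition tensor of shape (S, A, S'); r: reward on the successor state.
    Returns time-ordered soft-optimal policies pi[t][s, a].
    """
    V = np.zeros(T.shape[0])
    policies = []
    for _ in range(horizon):
        Q = T @ (r + V)                                   # Q[s, a] = E[r(s') + V(s')]
        Q_max = Q.max(axis=1, keepdims=True)
        V = (Q_max + np.log(np.exp(Q - Q_max).sum(axis=1, keepdims=True))).ravel()
        policies.append(np.exp(Q - V[:, None]))           # pi(a|s) = exp(Q - V)
    return policies[::-1]

def log_state_likelihood(T, theta, features, horizon, observed_state):
    """log p(s_T = observed_state | theta) under the soft-optimal human policy,
    starting from a uniform prior over the state the human began in."""
    r = features @ theta
    policies = soft_value_iteration(T, r, horizon)
    p = np.full(T.shape[0], 1.0 / T.shape[0])
    for pi in policies:
        p = np.einsum("s,sa,sax->x", p, pi, T)            # one step of the human policy
    return np.log(p[observed_state])

# Toy environment (an assumption for the example): 3 states in a row,
# actions {left, stay, right}, deterministic moves clipped at the walls.
n_states, n_actions = 3, 3
T = np.zeros((n_states, n_actions, n_states))
for s in range(n_states):
    T[s, 0, max(s - 1, 0)] = 1.0
    T[s, 1, s] = 1.0
    T[s, 2, min(s + 1, n_states - 1)] = 1.0
features = np.eye(n_states)       # one indicator feature per state
observed_state = 2                # the state the robot observes at deployment
horizon = 5                       # how long the human is assumed to have acted

# Ascend log p(observed_state | theta). The paper derives an exact
# dynamic-programming gradient; finite differences keep this sketch short.
theta = np.zeros(n_states)
eps, lr = 1e-4, 0.1
for _ in range(300):
    base = log_state_likelihood(T, theta, features, horizon, observed_state)
    grad = np.zeros_like(theta)
    for i in range(n_states):
        bumped = theta.copy()
        bumped[i] += eps
        grad[i] = (log_state_likelihood(T, bumped, features, horizon, observed_state) - base) / eps
    theta += lr * grad

print("inferred reward weights:", np.round(theta, 2))     # theta should end highest on the observed state
```

The finite-difference gradient is used purely to keep the sketch self-contained and easy to check; an exact gradient would require the dynamic program over possible past trajectories that the paper develops.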