SEMI: Self-supervised Exploration via Multisensory Incongruity
Saved in:

Main authors: , ,
Format: Article
Language: eng
Subject headings:
Online access: Order full text
Summary: Efficient exploration is a long-standing problem in reinforcement learning
since extrinsic rewards are usually sparse or missing. A popular solution to
this issue is to feed an agent with novelty signals as intrinsic rewards. In
this work, we introduce SEMI, a self-supervised exploration policy by
incentivizing the agent to maximize a new novelty signal: multisensory
incongruity, which can be measured in two aspects, perception incongruity and
action incongruity. The former represents the misalignment of the multisensory
inputs, while the latter represents the variance of an agent's policies under
different sensory inputs. Specifically, an alignment predictor is learned to
detect whether multiple sensory inputs are aligned, the error of which is used
to measure perception incongruity. A policy model takes different combinations
of the multisensory observations as input and outputs actions for exploration.
The variance of actions is further used to measure action incongruity. Using
both incongruities as intrinsic rewards, SEMI allows an agent to learn skills
by exploring in a self-supervised manner without any external rewards. We
further show that SEMI is compatible with extrinsic rewards and improves the
sample efficiency of policy learning. The effectiveness of SEMI is demonstrated
across a variety of benchmark environments, including object manipulation and
audio-visual games.
DOI: 10.48550/arxiv.2009.12494
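The summary describes the two novelty signals operationally: an alignment predictor whose error on paired sensory inputs measures perception incongruity, and a policy whose action variance across different combinations of the sensory inputs measures action incongruity, with both summed into an intrinsic reward. The sketch below illustrates how such a reward could be computed; the module names, feature dimensions, zero-masking of missing modalities, and the weights `w_perception`/`w_action` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed names/sizes) of an intrinsic reward combining
# perception incongruity and action incongruity, as described in the summary.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AlignmentPredictor(nn.Module):
    """Binary classifier: do the visual and audio features belong together?"""

    def __init__(self, vis_dim=128, aud_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vis_dim + aud_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, vis_feat, aud_feat):
        return self.net(torch.cat([vis_feat, aud_feat], dim=-1)).squeeze(-1)


class MultisensoryPolicy(nn.Module):
    """Policy that acts from vision alone, audio alone, or both (hypothetical)."""

    def __init__(self, vis_dim=128, aud_dim=128, act_dim=4, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vis_dim + aud_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, vis_feat, aud_feat):
        # A missing modality is zero-masked here; a learned "missing" token
        # would be a reasonable alternative.
        return self.net(torch.cat([vis_feat, aud_feat], dim=-1))


def intrinsic_reward(policy, align_pred, vis_feat, aud_feat,
                     w_perception=1.0, w_action=1.0):
    """Per-sample reward = w_p * perception incongruity + w_a * action incongruity."""
    # Perception incongruity: the alignment predictor's error on the (truly
    # aligned) pair, taken as cross-entropy against the "aligned" label.
    logits = align_pred(vis_feat, aud_feat)
    perception_incongruity = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits), reduction="none")

    # Action incongruity: variance of the actions produced under different
    # combinations of the sensory inputs (vision only, audio only, both).
    zeros_v, zeros_a = torch.zeros_like(vis_feat), torch.zeros_like(aud_feat)
    actions = torch.stack([
        policy(vis_feat, zeros_a),   # vision only
        policy(zeros_v, aud_feat),   # audio only
        policy(vis_feat, aud_feat),  # both modalities
    ], dim=0)                        # shape (3, batch, act_dim)
    action_incongruity = actions.var(dim=0, unbiased=False).mean(dim=-1)

    return w_perception * perception_incongruity + w_action * action_incongruity


if __name__ == "__main__":
    policy, align_pred = MultisensoryPolicy(), AlignmentPredictor()
    vis, aud = torch.randn(8, 128), torch.randn(8, 128)
    print(intrinsic_reward(policy, align_pred, vis, aud))  # one reward per sample
```

In a purely self-supervised setting this quantity would serve as the only reward; when extrinsic rewards are available, it would simply be added to them during policy optimization, consistent with the summary's claim that the signal is compatible with external rewards.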