Policy Architectures for Compositional Generalization in Control
Main Authors: , , ,
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Summary: Many tasks in control, robotics, and planning can be specified using desired
goal configurations for various entities in the environment. Learning
goal-conditioned policies is a natural paradigm to solve such tasks. However,
current approaches struggle to learn and generalize as task complexity
increases, such as variations in number of environment entities or compositions
of goals. In this work, we introduce a framework for modeling entity-based
compositional structure in tasks, and create suitable policy designs that can
leverage this structure. Our policies, which utilize architectures like Deep
Sets and Self Attention, are flexible and can be trained end-to-end without
requiring any action primitives. When trained using standard reinforcement and
imitation learning methods on a suite of simulated robot manipulation tasks, we
find that these architectures achieve significantly higher success rates with
less data. We also find these architectures enable broader and compositional
generalization, producing policies that extrapolate to different numbers of
entities than seen in training, and stitch together (i.e. compose) learned
skills in novel ways. Videos of the results can be found at
https://sites.google.com/view/comp-gen-rl.
DOI: 10.48550/arxiv.2203.05960
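
The summary above describes permutation-invariant, entity-centric policy architectures such as Deep Sets and Self Attention. The following PyTorch sketch is an illustration only, not the paper's code: it shows a Deep Sets-style goal-conditioned policy over a variable number of (entity state, entity goal) pairs. All dimensions, class and parameter names, and the sum-pooling choice are assumptions made for the example.

```python
# Illustrative sketch (not the paper's implementation): a Deep Sets-style
# goal-conditioned policy over a variable number of entities.
import torch
import torch.nn as nn


class DeepSetsPolicy(nn.Module):
    def __init__(self, entity_dim: int, goal_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        # phi: encodes each (entity state, entity goal) pair independently,
        # which makes the policy permutation-invariant over entities.
        self.phi = nn.Sequential(
            nn.Linear(entity_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # rho: maps the pooled entity embedding to an action.
        self.rho = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, entities: torch.Tensor, goals: torch.Tensor) -> torch.Tensor:
        # entities: (batch, num_entities, entity_dim)
        # goals:    (batch, num_entities, goal_dim)
        per_entity = self.phi(torch.cat([entities, goals], dim=-1))
        pooled = per_entity.sum(dim=1)  # pooling is agnostic to the entity count
        return self.rho(pooled)


# The same policy weights apply to scenes with 3 or 7 entities.
policy = DeepSetsPolicy(entity_dim=10, goal_dim=3, action_dim=4)
action_3 = policy(torch.randn(1, 3, 10), torch.randn(1, 3, 3))
action_7 = policy(torch.randn(1, 7, 10), torch.randn(1, 7, 3))
```

Because the per-entity encoder and the pooling step never fix the number of entities, a policy of this shape can, in principle, be evaluated on more or fewer entities than it was trained on, which is the kind of extrapolation the abstract reports.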