Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts
Format: Article
Language: English
Abstract: Multi-Task Reinforcement Learning (MTRL) tackles the long-standing problem of
endowing agents with skills that generalize across a variety of problems. To
this end, sharing representations plays a fundamental role in capturing both
unique and common characteristics of the tasks. Tasks may exhibit similarities
in terms of skills, objects, or physical properties while leveraging their
representations eases the achievement of a universal policy. Nevertheless, the
pursuit of learning a shared set of diverse representations is still an open
challenge. In this paper, we introduce a novel approach for representation
learning in MTRL that encapsulates common structures among the tasks using
orthogonal representations to promote diversity. Our method, named Mixture Of
Orthogonal Experts (MOORE), leverages a Gram-Schmidt process to shape a shared
subspace of representations generated by a mixture of experts. When
task-specific information is provided, MOORE generates relevant representations
from this shared subspace. We assess the effectiveness of our approach on two
MTRL benchmarks, namely MiniGrid and MetaWorld, showing that MOORE surpasses
related baselines and establishes a new state-of-the-art result on MetaWorld.
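The core idea in the abstract, using a Gram-Schmidt process to turn a mixture of expert representations into an orthogonal basis, and then combining that basis with task-specific weights, can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions (random expert outputs, hypothetical mixture weights), not the paper's actual implementation:

```python
import numpy as np

def gram_schmidt(expert_reps: np.ndarray) -> np.ndarray:
    """Orthonormalize the rows of expert_reps (k experts x d features)
    via classical Gram-Schmidt, promoting diversity among experts."""
    k, d = expert_reps.shape
    ortho = np.zeros((k, d))
    for i in range(k):
        v = expert_reps[i].astype(float).copy()
        for j in range(i):
            # Remove the component along each earlier basis vector.
            v -= (v @ ortho[j]) * ortho[j]
        ortho[i] = v / np.linalg.norm(v)
    return ortho

# Hypothetical example: 4 experts each emitting an 8-dim representation.
rng = np.random.default_rng(0)
reps = rng.normal(size=(4, 8))
basis = gram_schmidt(reps)

# Illustrative task conditioning: task-specific mixture weights select a
# representation from the shared orthogonal subspace.
task_weights = np.array([0.5, 0.2, 0.2, 0.1])
task_rep = task_weights @ basis
```

The orthonormality of `basis` is what enforces diversity: no expert's contribution can be expressed as a combination of the others'.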
DOI: 10.48550/arxiv.2311.11385