Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts

Multi-Task Reinforcement Learning (MTRL) tackles the long-standing problem of endowing agents with skills that generalize across a variety of problems. To this end, sharing representations plays a fundamental role in capturing both unique and common characteristics of the tasks. Tasks may exhibit similarities in terms of skills, objects, or physical properties, while leveraging their representations eases the achievement of a universal policy. Nevertheless, the pursuit of learning a shared set of diverse representations is still an open challenge. In this paper, we introduce a novel approach for representation learning in MTRL that encapsulates common structures among the tasks using orthogonal representations to promote diversity. Our method, named Mixture Of Orthogonal Experts (MOORE), leverages a Gram-Schmidt process to shape a shared subspace of representations generated by a mixture of experts. When task-specific information is provided, MOORE generates relevant representations from this shared subspace. We assess the effectiveness of our approach on two MTRL benchmarks, namely MiniGrid and MetaWorld, showing that MOORE surpasses related baselines and establishes a new state-of-the-art result on MetaWorld.
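
The sketch below illustrates the mechanism the abstract describes: a mixture of experts produces per-task-agnostic representations, a Gram-Schmidt step orthogonalizes them into a shared subspace, and task-specific information weights the orthogonal components. This is not the authors' implementation; the module names, network sizes, softmax task weighting, and the per-sample Gram-Schmidt loop are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of a MOORE-style encoder (hypothetical, not the paper's code).
import torch
import torch.nn as nn


def gram_schmidt(vectors: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Orthonormalize a set of vectors.

    vectors: (num_experts, dim), rows are expert representations.
    Returns a tensor of the same shape with orthonormal rows.
    """
    basis = []
    for v in vectors:
        w = v.clone()
        for b in basis:
            w = w - (w @ b) * b      # remove the component along b
        w = w / (w.norm() + eps)      # normalize (eps guards near-zero vectors)
        basis.append(w)
    return torch.stack(basis)


class MixtureOfOrthogonalExperts(nn.Module):
    """Hypothetical encoder: orthogonalized expert features mixed with task-specific weights."""

    def __init__(self, obs_dim: int, feat_dim: int, num_experts: int, num_tasks: int):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU(),
                          nn.Linear(feat_dim, feat_dim))
            for _ in range(num_experts)
        ])
        # One mixing weight per expert, conditioned on the task id (assumed form).
        self.task_weights = nn.Embedding(num_tasks, num_experts)

    def forward(self, obs: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # (batch, num_experts, feat_dim): one representation per expert.
        feats = torch.stack([e(obs) for e in self.experts], dim=1)
        # Gram-Schmidt per sample: expert representations span an orthogonal subspace.
        ortho = torch.stack([gram_schmidt(f) for f in feats])
        # Task-specific combination of the orthogonal components.
        w = torch.softmax(self.task_weights(task_id), dim=-1)   # (batch, num_experts)
        return torch.einsum("be,bed->bd", w, ortho)             # (batch, feat_dim)


# Tiny usage example with made-up sizes.
enc = MixtureOfOrthogonalExperts(obs_dim=39, feat_dim=64, num_experts=4, num_tasks=10)
obs = torch.randn(8, 39)
task_id = torch.randint(0, 10, (8,))
print(enc(obs, task_id).shape)  # torch.Size([8, 64])
```

The per-sample Python loop is written for readability; a batched QR decomposition would achieve the same orthogonalization more efficiently.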

Bibliographic Details
Main Authors: Hendawy, Ahmed; Peters, Jan; D'Eramo, Carlo
Format: Article
Language: English
Subjects: Computer Science - Learning
Online Access: Order full text
creator Hendawy, Ahmed; Peters, Jan; D'Eramo, Carlo
description Multi-Task Reinforcement Learning (MTRL) tackles the long-standing problem of endowing agents with skills that generalize across a variety of problems. To this end, sharing representations plays a fundamental role in capturing both unique and common characteristics of the tasks. Tasks may exhibit similarities in terms of skills, objects, or physical properties while leveraging their representations eases the achievement of a universal policy. Nevertheless, the pursuit of learning a shared set of diverse representations is still an open challenge. In this paper, we introduce a novel approach for representation learning in MTRL that encapsulates common structures among the tasks using orthogonal representations to promote diversity. Our method, named Mixture Of Orthogonal Experts (MOORE), leverages a Gram-Schmidt process to shape a shared subspace of representations generated by a mixture of experts. When task-specific information is provided, MOORE generates relevant representations from this shared subspace. We assess the effectiveness of our approach on two MTRL benchmarks, namely MiniGrid and MetaWorld, showing that MOORE surpasses related baselines and establishes a new state-of-the-art result on MetaWorld.
doi_str_mv 10.48550/arxiv.2311.11385
format Article
creationdate 2023-11-19
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.2311.11385
language eng
recordid cdi_arxiv_primary_2311_11385
source arXiv.org
subjects Computer Science - Learning
title Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts