Foundations for Transfer in Reinforcement Learning: A Taxonomy of Knowledge Modalities
Contemporary artificial intelligence systems exhibit rapidly growing abilities accompanied by the growth of required resources, expansive datasets and corresponding investments into computing infrastructure. Although earlier successes predominantly focus on constrained settings, recent strides in fundamental research and applications aspire to create increasingly general systems. This evolving landscape presents a dual panorama of opportunities and challenges in refining the generalisation and transfer of knowledge - the extraction from existing sources and adaptation as a comprehensive foundation for tackling new problems. Within the domain of reinforcement learning (RL), the representation of knowledge manifests through various modalities, including dynamics and reward models, value functions, policies, and the original data. This taxonomy systematically targets these modalities and frames its discussion based on their inherent properties and alignment with different objectives and mechanisms for transfer. Where possible, we aim to provide coarse guidance delineating approaches which address requirements such as limiting environment interactions, maximising computational efficiency, and enhancing generalisation across varying axes of change. Finally, we analyse reasons contributing to the prevalence or scarcity of specific forms of transfer, the inherent potential behind pushing these frontiers, and underscore the significance of transitioning from designed to learned transfer.
Saved in:
Main authors: | Wulfmeier, Markus; Byravan, Arunkumar; Bechtle, Sarah; Hausman, Karol; Heess, Nicolas |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Learning; Computer Science - Robotics; Statistics - Machine Learning |
Online access: | Order full text |
creator | Wulfmeier, Markus; Byravan, Arunkumar; Bechtle, Sarah; Hausman, Karol; Heess, Nicolas |
description | Contemporary artificial intelligence systems exhibit rapidly growing
abilities accompanied by the growth of required resources, expansive datasets
and corresponding investments into computing infrastructure. Although earlier
successes predominantly focus on constrained settings, recent strides in
fundamental research and applications aspire to create increasingly general
systems. This evolving landscape presents a dual panorama of opportunities and
challenges in refining the generalisation and transfer of knowledge - the
extraction from existing sources and adaptation as a comprehensive foundation
for tackling new problems. Within the domain of reinforcement learning (RL),
the representation of knowledge manifests through various modalities, including
dynamics and reward models, value functions, policies, and the original data.
This taxonomy systematically targets these modalities and frames its discussion
based on their inherent properties and alignment with different objectives and
mechanisms for transfer. Where possible, we aim to provide coarse guidance
delineating approaches which address requirements such as limiting environment
interactions, maximising computational efficiency, and enhancing generalisation
across varying axes of change. Finally, we analyse reasons contributing to the
prevalence or scarcity of specific forms of transfer, the inherent potential
behind pushing these frontiers, and underscore the significance of
transitioning from designed to learned transfer. |
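The abstract organises transferable knowledge in RL into dynamics and reward models, value functions, policies, and the original data. As a purely illustrative aid (not taken from the paper), these modalities can be sketched as a container over a toy deterministic MDP, where a transferred dynamics model, reward model, and value function combine into a one-step lookahead; all names and the toy MDP here are hypothetical.

```python
# Hypothetical sketch of the knowledge modalities named in the abstract,
# instantiated for a tiny two-state deterministic MDP. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class KnowledgeModalities:
    dynamics: dict                 # (state, action) -> next state, a model of P(s'|s,a)
    rewards: dict                  # (state, action) -> reward, a model of r(s,a)
    values: dict                   # state -> V(s), e.g. learned on a source task
    policy: dict                   # state -> action, the behaviour itself
    data: list = field(default_factory=list)  # raw (s, a, r, s') transitions

# A two-state toy MDP: action "go" moves from state 0 to 1 and stays at 1.
km = KnowledgeModalities(
    dynamics={(0, "go"): 1, (1, "go"): 1},
    rewards={(0, "go"): 0.0, (1, "go"): 1.0},
    values={0: 0.9, 1: 1.0},
    policy={0: "go", 1: "go"},
    data=[(0, "go", 0.0, 1), (1, "go", 1.0, 1)],
)

def q_estimate(km: KnowledgeModalities, s, a, gamma: float = 0.9) -> float:
    """One-step lookahead: combine transferred dynamics, reward, and value models."""
    s_next = km.dynamics[(s, a)]
    return km.rewards[(s, a)] + gamma * km.values[s_next]
```

A new task could reuse any subset of these modalities: the raw `data` is the most general but bulkiest form, while the `policy` is the most compact but least adaptable, which is one axis the taxonomy discusses.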
doi_str_mv | 10.48550/arxiv.2312.01939 |
format | Article |
creationdate | 2023-12-04 |
rights | http://creativecommons.org/licenses/by-nc-sa/4.0 |
oa | free_for_read |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2312.01939 |
language | eng |
recordid | cdi_arxiv_primary_2312_01939 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Learning; Computer Science - Robotics; Statistics - Machine Learning |
title | Foundations for Transfer in Reinforcement Learning: A Taxonomy of Knowledge Modalities |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-29T05%3A12%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Foundations%20for%20Transfer%20in%20Reinforcement%20Learning:%20A%20Taxonomy%20of%20Knowledge%20Modalities&rft.au=Wulfmeier,%20Markus&rft.date=2023-12-04&rft_id=info:doi/10.48550/arxiv.2312.01939&rft_dat=%3Carxiv_GOX%3E2312_01939%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |