CoRL: Environment Creation and Management Focused on System Integration

Existing reinforcement learning environment libraries use monolithic environment classes, provide shallow methods for altering agent observation and action spaces, and/or are tied to a specific simulation environment. The Core Reinforcement Learning library (CoRL) is a modular, composable, and hyper-configurable environment creation tool. It allows minute control over agent observations, rewards, and done conditions through the use of easy-to-read configuration files, pydantic validators, and a functor design pattern. Using integration pathways allows agents to be quickly implemented in new simulation environments, encourages rapid exploration, and enables transition of knowledge from low-fidelity to high-fidelity simulations. Natively multi-agent design and integration with Ray/RLLib (Liang et al., 2018) at release allow for easy scalability of agent complexity and computing power. The code is publicly released and available at https://github.com/act3-ace/CoRL.
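The abstract's "configuration files, pydantic validators, and a functor design pattern" can be illustrated with a minimal sketch. This is not CoRL's actual API: the class names and config fields below are hypothetical, and a stdlib dataclass with `__post_init__` checks stands in for a pydantic validator so the example is self-contained.

```python
from dataclasses import dataclass


@dataclass
class DistanceRewardConfig:
    """Parameters that would normally be loaded from a config file and
    checked by a pydantic validator; a dataclass stands in here."""
    scale: float = 1.0
    max_distance: float = 100.0

    def __post_init__(self):
        # Validation step, analogous to a pydantic validator.
        if self.max_distance <= 0:
            raise ValueError("max_distance must be positive")


class DistanceReward:
    """Functor: validated and configured once, then called every
    environment step like a plain function."""

    def __init__(self, config: dict):
        self.config = DistanceRewardConfig(**config)

    def __call__(self, distance: float) -> float:
        # Reward shrinks linearly to zero at max_distance.
        c = self.config
        return c.scale * max(0.0, 1.0 - distance / c.max_distance)


reward_fn = DistanceReward({"scale": 2.0, "max_distance": 50.0})
print(reward_fn(25.0))  # prints 1.0
```

Because each reward (or done condition) is an independent validated functor, swapping or re-tuning one amounts to editing a config entry rather than changing a monolithic environment class.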

Detailed Description

Bibliographic Details
Main Authors: Merrick, Justin D, Heiner, Benjamin K, Long, Cameron, Stieber, Brian, Fierro, Steve, Gangal, Vardaan, Blake, Madison, Blackburn, Joshua
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Merrick, Justin D; Heiner, Benjamin K; Long, Cameron; Stieber, Brian; Fierro, Steve; Gangal, Vardaan; Blake, Madison; Blackburn, Joshua
description Existing reinforcement learning environment libraries use monolithic environment classes, provide shallow methods for altering agent observation and action spaces, and/or are tied to a specific simulation environment. The Core Reinforcement Learning library (CoRL) is a modular, composable, and hyper-configurable environment creation tool. It allows minute control over agent observations, rewards, and done conditions through the use of easy-to-read configuration files, pydantic validators, and a functor design pattern. Using integration pathways allows agents to be quickly implemented in new simulation environments, encourages rapid exploration, and enables transition of knowledge from low-fidelity to high-fidelity simulations. Natively multi-agent design and integration with Ray/RLLib (Liang et al., 2018) at release allow for easy scalability of agent complexity and computing power. The code is publicly released and available at https://github.com/act3-ace/CoRL.
doi_str_mv 10.48550/arxiv.2303.02182
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2303.02182
language eng
recordid cdi_arxiv_primary_2303_02182
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Learning
title CoRL: Environment Creation and Management Focused on System Integration