The Arcade Learning Environment: An Evaluation Platform for General Agents
In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available.
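The abstract describes ALE as a programmatic interface to Atari 2600 games. As a rough illustration of the kind of agent-environment loop such an interface supports, here is a minimal sketch assuming the ale-py Python bindings and a locally available ROM file; neither is specified in this record, and the ROM path and random action choice are placeholders only.

```python
# Minimal sketch of an agent-environment loop on top of ALE.
# Assumes the ale-py bindings and a local ROM file; the ROM path and the
# random action choice are illustrative placeholders, not part of this record.
import random

from ale_py import ALEInterface

ale = ALEInterface()
ale.setInt("random_seed", 123)         # fix the emulator seed for reproducibility
ale.loadROM("roms/breakout.bin")       # hypothetical path to an Atari 2600 ROM

actions = ale.getMinimalActionSet()    # legal actions for the loaded game

episode_return = 0.0
while not ale.game_over():
    action = random.choice(actions)    # stand-in for a learning or planning agent
    episode_return += ale.act(action)  # advance the emulator one step, collect reward
print(f"Episode return: {episode_return}")
```

The benchmark agents described in the article replace the random policy above with learned or planning-based action selection.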
Published in: The Journal of artificial intelligence research, 2013-01, Vol. 47, p. 253-279
Main authors: Bellemare, M. G.; Naddaf, Y.; Veness, J.; Bowling, M.
Format: Article
Language: English
Subjects: Artificial intelligence; Domains; Machine learning
Online access: Full text
Full record:

| Field | Value |
|---|---|
| Title | The Arcade Learning Environment: An Evaluation Platform for General Agents |
| Creators | Bellemare, M. G.; Naddaf, Y.; Veness, J.; Bowling, M. |
| Journal | The Journal of artificial intelligence research |
| Volume | 47 |
| Pages | 253-279 |
| Format | Article |
| DOI | 10.1613/jair.3912 |
| Abstract | See the abstract above. |
| Full text | Available |
| ISSN | 1076-9757 |
| EISSN | 1076-9757; 1943-5037 |
| Part of | The Journal of artificial intelligence research, 2013-01, Vol. 47, p. 253-279 |
| Language | English (eng) |
| Publisher | AI Access Foundation, San Francisco |
| Record ID | cdi_proquest_journals_2554101099 |
| Source | DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; Free E-Journals |
| Subjects | Artificial intelligence; Domains; Machine learning |