Evolutionary Reinforcement Learning: A Systematic Review and Future Directions

In response to the limitations of reinforcement learning and evolutionary algorithms (EAs) in complex problem-solving, Evolutionary Reinforcement Learning (EvoRL) has emerged as a synergistic solution. EvoRL integrates EAs and reinforcement learning, presenting a promising avenue for training intelligent agents.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Lin, Yuanguo; Lin, Fan; Cai, Guorong; Chen, Hong; Zou, Lixin; Wu, Pengcheng
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Lin, Yuanguo; Lin, Fan; Cai, Guorong; Chen, Hong; Zou, Lixin; Wu, Pengcheng
description In response to the limitations of reinforcement learning and evolutionary algorithms (EAs) in complex problem-solving, Evolutionary Reinforcement Learning (EvoRL) has emerged as a synergistic solution. EvoRL integrates EAs and reinforcement learning, presenting a promising avenue for training intelligent agents. This systematic review first navigates through the technological background of EvoRL, examining the symbiotic relationship between EAs and reinforcement learning algorithms. We then delve into the challenges faced by both EAs and reinforcement learning, exploring their interplay and impact on the efficacy of EvoRL. Furthermore, the review underscores the need for addressing open issues related to scalability, adaptability, sample efficiency, adversarial robustness, ethics, and fairness within the current landscape of EvoRL. Finally, we propose future directions for EvoRL, emphasizing research avenues that strive to enhance self-adaptation and self-improvement, generalization, interpretability, explainability, and so on. Serving as a comprehensive resource for researchers and practitioners, this systematic review provides insights into the current state of EvoRL and offers a guide for advancing its capabilities in the ever-evolving landscape of artificial intelligence.
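The core idea the abstract describes — an EA exploring policy-parameter space while episodic return from an RL-style rollout serves as fitness — can be illustrated with a minimal sketch. This is not an algorithm from the paper; `evolve_policy`, `episode_return`, and all parameter values are hypothetical, and the "environment" is a stand-in reward function rather than a real RL task.

```python
import random

def episode_return(w, target=3.0):
    # Stand-in for an RL rollout: return peaks when the single policy
    # parameter w matches a (hypothetical) optimal value.
    return -(w - target) ** 2

def evolve_policy(generations=200, pop_size=20, sigma=0.5, seed=0):
    """Minimal (1+lambda) evolution strategy: the EA perturbs policy
    parameters, and the episodic return an agent would observe acts
    as the fitness signal for selection."""
    rng = random.Random(seed)
    parent = 0.0
    for _ in range(generations):
        # Mutation: Gaussian perturbations of the current parent policy.
        offspring = [parent + rng.gauss(0, sigma) for _ in range(pop_size)]
        # Selection: keep the best offspring only if it improves fitness.
        best = max(offspring, key=episode_return)
        if episode_return(best) >= episode_return(parent):
            parent = best
    return parent

if __name__ == "__main__":
    w = evolve_policy()
    print(round(w, 2))
```

The sketch shows only the EA half of the synergy; the methods the review surveys typically combine such population-based search with gradient-based RL updates (e.g. sharing replay data between the population and an off-policy learner).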
doi_str_mv 10.48550/arxiv.2402.13296
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2402.13296
language eng
recordid cdi_arxiv_primary_2402_13296
source arXiv.org
subjects Computer Science - Neural and Evolutionary Computing
title Evolutionary Reinforcement Learning: A Systematic Review and Future Directions
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-20T23%3A34%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Evolutionary%20Reinforcement%20Learning:%20A%20Systematic%20Review%20and%20Future%20Directions&rft.au=Lin,%20Yuanguo&rft.date=2024-02-19&rft_id=info:doi/10.48550/arxiv.2402.13296&rft_dat=%3Carxiv_GOX%3E2402_13296%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true