An Actor-Critic reinforcement learning algorithm based on adaptive RBF network
We introduce an actor-critic reinforcement learning algorithm for continuous state spaces. To cope with large-scale or continuous state spaces, the algorithm uses a radial basis function (RBF) neural network to approximate the state-value function. By training self-adaptive non-linear processing units, the network reconstructs the state space online and adaptively, improving the approximation. To make exploration more efficient, a hybrid exploration strategy is proposed. Experimental studies on a mountain-car control task illustrate the performance and applicability of the proposed algorithm.
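To make the method concrete, the sketch below implements an actor-critic learner of the kind the abstract describes, on the mountain-car task it cites. Two simplifications are assumptions of this sketch, not the paper's method: a fixed grid of Gaussian RBF centres stands in for the paper's adaptive, online-reconstructed RBF network, and softmax action selection stands in for the proposed hybrid exploration strategy.

```python
# Minimal actor-critic sketch with RBF value-function approximation on
# mountain car. ASSUMPTIONS (not from the paper): fixed RBF grid instead
# of the adaptive network; softmax exploration instead of the paper's
# hybrid strategy; all hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Mountain-car dynamics (Sutton & Barto formulation).
X_MIN, X_MAX, V_MIN, V_MAX = -1.2, 0.6, -0.07, 0.07
ACTIONS = np.array([-1.0, 0.0, 1.0])           # throttle: reverse, idle, forward

def step(x, v, a):
    v = np.clip(v + 0.001 * a - 0.0025 * np.cos(3 * x), V_MIN, V_MAX)
    x = np.clip(x + v, X_MIN, X_MAX)
    if x <= X_MIN:                              # inelastic wall on the left
        v = 0.0
    return x, v, -1.0, x >= 0.5                 # -1 per step until the goal

# Fixed 10x10 grid of Gaussian RBF centres over (position, velocity).
cx, cv = np.meshgrid(np.linspace(X_MIN, X_MAX, 10), np.linspace(V_MIN, V_MAX, 10))
CENTRES = np.column_stack([cx.ravel(), cv.ravel()])
WIDTHS = np.array([(X_MAX - X_MIN) / 10, (V_MAX - V_MIN) / 10])

def phi(x, v):
    d = (np.array([x, v]) - CENTRES) / WIDTHS   # scaled distance to each centre
    return np.exp(-0.5 * (d ** 2).sum(axis=1))

N = CENTRES.shape[0]
w = np.zeros(N)                # critic weights: V(s) = w . phi(s)
theta = np.zeros((3, N))       # actor preferences, one row per action
GAMMA, A_W, A_T = 0.99, 0.1, 0.05

for episode in range(200):
    x, v = rng.uniform(-0.6, -0.4), 0.0
    for t in range(2000):
        f = phi(x, v)
        prefs = theta @ f
        pi = np.exp(prefs - prefs.max())
        pi /= pi.sum()                          # softmax policy over 3 actions
        a = rng.choice(3, p=pi)
        x2, v2, r, done = step(x, v, ACTIONS[a])
        # TD(0) error: delta = r + gamma * V(s') - V(s), no bootstrap at goal.
        target = r if done else r + GAMMA * (w @ phi(x2, v2))
        delta = target - w @ f
        w += A_W * delta * f                    # critic update
        grad = -pi[:, None] * f                 # d log pi(a|s) / d theta
        grad[a] += f
        theta += A_T * delta * grad             # actor (policy-gradient) update
        x, v = x2, v2
        if done:
            break
```

Each episode is capped at 2000 steps; under the reward of -1 per step, shorter episodes directly indicate better policies, which makes episode length a convenient learning curve for this task.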
Main authors: | Chun-Gui Li; Meng Wang; Zhen-Jin Huang; Zeng-Fang Zhang |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Actor-Critic reinforcement learning; Adaptive RBF network; Adaptive systems; Computer networks; Cybernetics; Exploration strategy; Function approximation; Machine learning; Machine learning algorithms; Neural networks; Radial basis function networks; Space technology; State-space methods |
Online access: | Order full text |
container_end_page | 988 |
---|---|
container_issue | |
container_start_page | 984 |
container_title | 2009 International Conference on Machine Learning and Cybernetics |
container_volume | 2 |
creator | Chun-Gui Li; Meng Wang; Zhen-Jin Huang; Zeng-Fang Zhang |
description | We introduce an actor-critic reinforcement learning algorithm for continuous state spaces. To cope with large-scale or continuous state spaces, the algorithm uses a radial basis function (RBF) neural network to approximate the state-value function. By training self-adaptive non-linear processing units, the network reconstructs the state space online and adaptively, improving the approximation. To make exploration more efficient, a hybrid exploration strategy is proposed. Experimental studies on a mountain-car control task illustrate the performance and applicability of the proposed algorithm. |
doi_str_mv | 10.1109/ICMLC.2009.5212431 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2160-133X; ISBN: 9781424437023, 1424437024; EISBN: 9781424437030, 1424437032 |
ispartof | 2009 International Conference on Machine Learning and Cybernetics, 2009, Vol.2, p.984-988 |
issn | 2160-133X |
language | eng |
recordid | cdi_ieee_primary_5212431 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Actor-Critic reinforcement learning; Adaptive RBF network; Adaptive systems; Computer networks; Cybernetics; Exploration strategy; Function approximation; Machine learning; Machine learning algorithms; Neural networks; Radial basis function networks; Space technology; State-space methods |
title | An Actor-Critic reinforcement learning algorithm based on adaptive RBF network |