A Model Based Reinforcement Learning Approach Using On-Line Clustering
A significant issue in representing reinforcement learning agents in Markov decision processes is the design of efficient feature spaces for estimating an optimal policy. This study addresses the challenge by proposing a compact framework that employs an on-line clustering approach for constructing appropriate basis functions. It also performs a state-action trajectory analysis to gain valuable affinity information among clusters and to estimate their transition dynamics. Value function approximation is used for policy evaluation in a least-squares temporal difference framework. The proposed method is evaluated in several simulated and real environments, where it yields promising results.
Saved in:
Main Authors: | Tziortziotis, N.; Blekas, K. |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | model-based reinforcement learning; on-line EM; mixture models; clustering |
Online Access: | Order full text |
container_start_page | 712 |
---|---|
container_end_page | 718 |
container_issue | |
container_title | 2012 IEEE 24th International Conference on Tools with Artificial Intelligence |
container_volume | 1 |
creator | Tziortziotis, N.; Blekas, K. |
description | A significant issue in representing reinforcement learning agents in Markov decision processes is the design of efficient feature spaces for estimating an optimal policy. This study addresses the challenge by proposing a compact framework that employs an on-line clustering approach for constructing appropriate basis functions. It also performs a state-action trajectory analysis to gain valuable affinity information among clusters and to estimate their transition dynamics. Value function approximation is used for policy evaluation in a least-squares temporal difference framework. The proposed method is evaluated in several simulated and real environments, where it yields promising results. |
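The description above combines on-line clustering (to place basis functions over the state space) with least-squares temporal difference (LSTD) policy evaluation. A minimal sketch of that combination, under loudly labeled assumptions: the distance-threshold cluster rule, the Gaussian RBF features, and all names (`OnlineClusterBasis`, `add_threshold`, `lstd`) are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

class OnlineClusterBasis:
    """Grow RBF centres from observed states (a crude on-line clustering).

    Assumption: a new centre is added whenever a state lies farther than
    `add_threshold` from every existing centre; the paper's actual
    clustering/EM update is more sophisticated.
    """
    def __init__(self, add_threshold=1.0, width=1.0):
        self.centres = []            # list of state vectors
        self.add_threshold = add_threshold
        self.width = width

    def update(self, state):
        # Add a centre when the state is far from all existing centres.
        if (not self.centres or
                min(np.linalg.norm(state - c) for c in self.centres) > self.add_threshold):
            self.centres.append(np.asarray(state, dtype=float))

    def phi(self, state):
        # Gaussian RBF feature vector for a state.
        return np.array([np.exp(-np.sum((state - c) ** 2) / (2 * self.width ** 2))
                         for c in self.centres])

def lstd(transitions, basis, gamma=0.95, ridge=1e-6):
    """LSTD(0): solve A w = b with A = sum phi (phi - gamma phi')^T, b = sum r phi."""
    k = len(basis.centres)
    A = ridge * np.eye(k)            # small ridge term keeps A invertible
    b = np.zeros(k)
    for s, r, s_next in transitions:
        p, p_next = basis.phi(s), basis.phi(s_next)
        A += np.outer(p, p - gamma * p_next)
        b += r * p
    return np.linalg.solve(A, b)     # weights of the approximate value function
```

The approximate value of any state is then `basis.phi(s) @ w`; growing the basis on-line lets the feature space adapt to the visited region of the state space instead of being fixed in advance.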
doi_str_mv | 10.1109/ICTAI.2012.101 |
format | Conference Proceeding |
fullrecord | (raw export record omitted) |
isbn | 1479902276; 9781479902279 |
eisbn | 0769549152; 9780769549156 |
coden | IEEPAD |
publisher | IEEE |
publishDate | 2012-11 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1082-3409 |
ispartof | 2012 IEEE 24th International Conference on Tools with Artificial Intelligence, 2012, Vol.1, p.712-718 |
issn | 1082-3409 2375-0197 |
language | eng |
recordid | cdi_ieee_primary_6495113 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | clustering; Clustering algorithms; Equations; Function approximation; Kernel; Mathematical model; mixture models; model-based reinforcement learning; on-line EM; Robot kinematics |
title | A Model Based Reinforcement Learning Approach Using On-Line Clustering |