Epistemic Risk-Sensitive Reinforcement Learning
Saved in:
Main authors: | Eriksson, Hannes; Dimitrakakis, Christos |
---|---|
Format: | Article |
Language: | eng |
Keywords: | Computer Science - Artificial Intelligence; Computer Science - Learning; Statistics - Machine Learning |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Eriksson, Hannes; Dimitrakakis, Christos |
description | Published in: Proceedings of the 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2020), pp. 339-344. Abstract: We develop a framework for interacting with uncertain environments in reinforcement learning (RL) by leveraging preferences in the form of utility functions. We claim that there is value in considering different risk measures during learning. In this framework, the preference for risk can be tuned by varying the parameter $\beta$, and the resulting behavior can be risk-averse, risk-neutral, or risk-taking depending on the choice of parameter. We evaluate our framework on learning problems with model uncertainty, measuring and controlling for \emph{epistemic} risk using dynamic programming (DP) and policy gradient-based algorithms. The resulting risk-averse behavior is compared with the behavior of the optimal risk-neutral policy in environments with epistemic risk. |
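The abstract above describes a concrete mechanism: a utility-based risk preference controlled by a single parameter $\beta$, applied to the distribution of returns induced by *epistemic* (model) uncertainty. One common instantiation consistent with that description is the entropic (exponential-utility) risk measure evaluated over returns sampled from a posterior over models. The sketch below is an illustration under that assumption, not the authors' implementation; the helper names (`sample_mdp`, `evaluate_return`) and the sign convention for $\beta$ are hypothetical.

```python
import numpy as np

def entropic_risk(returns, beta):
    """Entropic (exponential-utility) risk of a sample of returns.

    Assumed sign convention (the paper may use the opposite one):
    beta < 0 -> risk-averse, beta = 0 -> risk-neutral (plain mean),
    beta > 0 -> risk-seeking.
    """
    returns = np.asarray(returns, dtype=float)
    if beta == 0.0:
        return returns.mean()
    # (1 / beta) * log E[exp(beta * G)], computed stably via log-sum-exp.
    shifted = beta * returns
    m = shifted.max()
    log_mean_exp = m + np.log(np.mean(np.exp(shifted - m)))
    return log_mean_exp / beta

def epistemic_risk_of_policy(sample_mdp, evaluate_return, beta, n_models=100):
    """Epistemic risk: apply the risk measure across models drawn from a
    posterior. `sample_mdp` (posterior sampler) and `evaluate_return`
    (policy evaluator, e.g. DP on each sampled model) are hypothetical
    stand-ins for the corresponding components of an actual implementation."""
    returns = [evaluate_return(sample_mdp()) for _ in range(n_models)]
    return entropic_risk(returns, beta)

if __name__ == "__main__":
    # Toy demonstration: the expected return of a fixed policy varies across
    # posterior model samples; varying beta changes the risk-adjusted value.
    rng = np.random.default_rng(0)
    returns = [rng.normal(loc=1.0, scale=0.5) for _ in range(1000)]
    for beta in (-5.0, 0.0, 5.0):
        print(f"beta={beta:+.1f}  risk-adjusted value={entropic_risk(returns, beta):.3f}")
```

With a negative $\beta$ the measure penalizes spread in the return distribution across sampled models (risk-averse), while $\beta = 0$ recovers the ordinary risk-neutral expectation, matching the tunable behavior the abstract describes.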
doi_str_mv | 10.48550/arxiv.1906.06273 |
format | Article |
fullrecord | arXiv:1906.06273; record created 2019-06-14; rights: http://arxiv.org/licenses/nonexclusive-distrib/1.0; open access (free to read); full text: https://arxiv.org/abs/1906.06273 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1906.06273 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_1906_06273 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Learning; Statistics - Machine Learning |
title | Epistemic Risk-Sensitive Reinforcement Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-18T04%3A45%3A38IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Epistemic%20Risk-Sensitive%20Reinforcement%20Learning&rft.au=Eriksson,%20Hannes&rft.date=2019-06-14&rft_id=info:doi/10.48550/arxiv.1906.06273&rft_dat=%3Carxiv_GOX%3E1906_06273%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |