A Study of State Aliasing in Structured Prediction with RNNs

End-to-end reinforcement learning agents learn a state representation and a policy at the same time. Recurrent neural networks (RNNs) have been trained successfully as reinforcement learning agents in settings like dialogue that require structured prediction. In this paper, we investigate the representations learned by RNN-based agents when trained with both policy gradient and value-based methods. We show through extensive experiments and analysis that, when trained with policy gradient, recurrent neural networks often fail to learn a state representation that leads to an optimal policy in settings where the same action should be taken at different states. To explain this failure, we highlight the problem of state aliasing, which entails conflating two or more distinct states in the representation space. We demonstrate that state aliasing occurs when several states share the same optimal action and the agent is trained via policy gradient. We characterize this phenomenon through experiments on a simple maze setting and a more complex text-based game, and make recommendations for training RNNs with reinforcement learning.
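
To make the setting concrete, here is a minimal, hypothetical sketch (not the authors' code or experimental setup): a GRU-based policy trained with REINFORCE on a toy corridor "maze" in which every cell shares the same optimal action (move right), the aliasing-prone case the abstract describes. The environment, network sizes, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch, assuming a toy corridor environment: an RNN policy trained with
# REINFORCE, where distinct states share the same optimal action.
import torch
import torch.nn as nn

class RNNPolicy(nn.Module):
    def __init__(self, n_obs=5, hidden=16, n_actions=2):
        super().__init__()
        self.rnn = nn.GRUCell(n_obs, hidden)
        self.head = nn.Linear(hidden, n_actions)
        self.hidden_size = hidden

    def forward(self, obs, h):
        h = self.rnn(obs, h)      # recurrent state representation
        logits = self.head(h)     # action logits from that representation
        return logits, h

def run_episode(policy, length=4):
    """Corridor of `length` steps; action 1 (move right) is optimal in every cell."""
    h = torch.zeros(1, policy.hidden_size)
    pos, log_probs = 0, []
    for _ in range(length):
        obs = torch.zeros(1, 5)
        obs[0, pos] = 1.0         # one-hot observation of the current cell
        logits, h = policy(obs, h)
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        pos = min(pos + 1, 4) if action.item() == 1 else max(pos - 1, 0)
    reward = 1.0 if pos == 4 else 0.0   # sparse terminal reward at the goal
    return torch.stack(log_probs), reward

policy = RNNPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
for episode in range(500):
    log_probs, reward = run_episode(policy)
    loss = -(log_probs.sum() * reward)  # REINFORCE: maximize return-weighted log-prob
    opt.zero_grad()
    loss.backward()
    opt.step()
```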

creator Asri, Layla El
Trischler, Adam
description End-to-end reinforcement learning agents learn a state representation and a policy at the same time. Recurrent neural networks (RNNs) have been trained successfully as reinforcement learning agents in settings like dialogue that require structured prediction. In this paper, we investigate the representations learned by RNN-based agents when trained with both policy gradient and value-based methods. We show through extensive experiments and analysis that, when trained with policy gradient, recurrent neural networks often fail to learn a state representation that leads to an optimal policy in settings where the same action should be taken at different states. To explain this failure, we highlight the problem of state aliasing, which entails conflating two or more distinct states in the representation space. We demonstrate that state aliasing occurs when several states share the same optimal action and the agent is trained via policy gradient. We characterize this phenomenon through experiments on a simple maze setting and a more complex text-based game, and make recommendations for training RNNs with reinforcement learning.
doi_str_mv 10.48550/arxiv.1906.09310
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1906.09310
language eng
recordid cdi_arxiv_primary_1906_09310
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Learning
title A Study of State Aliasing in Structured Prediction with RNNs
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-05T04%3A11%3A38IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Study%20of%20State%20Aliasing%20in%20Structured%20Prediction%20with%20RNNs&rft.au=Asri,%20Layla%20El&rft.date=2019-06-21&rft_id=info:doi/10.48550/arxiv.1906.09310&rft_dat=%3Carxiv_GOX%3E1906_09310%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true