JOINTLY LEARNING EXPLORATORY AND NON-EXPLORATORY ACTION SELECTION POLICIES

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation...

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Sprechmann, Pablo, Vitvitskyi, Alex, Guo, Zhaohan, Piot, Bilal, Badia, Adrià Puigdomènech, Kapturowski, Steven James, Blundell, Charles, Tieleman, Olivier
Format: Patent
Language: eng
Subjects:
Online Access: Order full text
description Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation characterizing a current state of the environment; processing the observation and an exploration importance factor using the action selection neural network to generate an action selection output; selecting an action to be performed by the agent using the action selection output; determining an exploration reward; determining an overall reward based on: (i) the exploration importance factor, and (ii) the exploration reward; and training the action selection neural network using a reinforcement learning technique based on the overall reward.
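The training loop in the abstract can be illustrated with a minimal sketch. All names here (`select_action`, `overall_reward`, `q_values`, `beta`) are illustrative assumptions, not from the patent; the abstract only states that the overall reward is "based on" the exploration importance factor and the exploration reward, so a simple additive combination scaled by the factor is assumed below:

```python
import numpy as np

def select_action(q_values: np.ndarray) -> int:
    # Greedy selection from the action-selection output (a real agent
    # would typically use an epsilon-greedy or sampled policy instead).
    return int(np.argmax(q_values))

def overall_reward(task_reward: float, exploration_reward: float, beta: float) -> float:
    # One plausible combination consistent with the abstract: the
    # exploration importance factor `beta` scales the exploration
    # reward before it is added to the task reward.
    return task_reward + beta * exploration_reward

# Toy pass through one step of the claimed loop.
rng = np.random.default_rng(0)
observation = rng.normal(size=8)       # observation of the current environment state
beta = 0.5                             # exploration importance factor
# Stand-in for processing (observation, beta) with the action selection network:
q_values = rng.normal(size=4)          # action selection output over 4 actions
action = select_action(q_values)
r = overall_reward(task_reward=1.0, exploration_reward=0.5, beta=beta)
print(action, r)
```

With `beta = 0` the agent is trained purely on the task reward (a non-exploratory policy); larger `beta` weights the exploration reward more heavily, which is how a single network conditioned on the factor can jointly represent exploratory and non-exploratory behavior.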
format Patent
fulltext fulltext_linktorsrc
language eng
recordid cdi_epo_espacenet_US2020372366A1
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
PHYSICS
title JOINTLY LEARNING EXPLORATORY AND NON-EXPLORATORY ACTION SELECTION POLICIES
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T23%3A03%3A48IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Sprechmann,%20Pablo&rft.date=2020-11-26&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS2020372366A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true