DATA-EFFICIENT REINFORCEMENT LEARNING FOR CONTINUOUS CONTROL TASKS

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-efficient reinforcement learning. One of the systems is a system for training an actor neural network used to select actions to be performed by an agent that interacts with an environment by receiving observations characterizing states of the environment and, in response to each observation, performing an action selected from a continuous space of possible actions, wherein the actor neural network maps observations to next actions in accordance with values of parameters of the actor neural network, and wherein the system comprises: a plurality of workers, wherein each worker is configured to operate independently of each other worker, wherein each worker is associated with a respective agent replica that interacts with a respective replica of the environment during the training of the actor neural network.
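The abstract describes a distributed data-collection setup: several independent workers, each paired with its own replica of the environment and of the actor, gathering experience used to train a single continuous-control policy. The sketch below is purely illustrative of that structure and is not taken from the patent; the names ToyEnv, LinearActor, and collect() are hypothetical, and a real system would run workers concurrently and train the actor off-policy from the shared buffer.

# Minimal sketch, assuming a toy 1-D environment and a one-parameter linear actor.
import random
from dataclasses import dataclass, field

@dataclass
class ToyEnv:
    """Hypothetical environment replica: the state drifts under the chosen action."""
    state: float = field(default_factory=lambda: random.uniform(-1.0, 1.0))

    def step(self, action: float):
        self.state += 0.1 * action
        reward = -abs(self.state)          # reward is highest when the state is near zero
        return self.state, reward

@dataclass
class LinearActor:
    """Hypothetical actor: maps an observation to a continuous action via one weight."""
    w: float = 0.0

    def act(self, obs: float) -> float:
        return self.w * obs                # action drawn from a continuous space

def collect(worker_id: int, actor_params: float, steps: int = 20):
    """Each worker operates independently with its own environment and actor replica."""
    env, actor = ToyEnv(), LinearActor(w=actor_params)
    transitions = []
    for _ in range(steps):
        obs = env.state
        action = actor.act(obs) + random.gauss(0.0, 0.3)   # exploration noise
        next_obs, reward = env.step(action)
        transitions.append((obs, action, reward, next_obs))
    return transitions

# Shared experience buffer filled by several independent worker replicas.
replay_buffer = []
actor_params = -0.5
for worker_id in range(4):
    replay_buffer.extend(collect(worker_id, actor_params))
print(f"collected {len(replay_buffer)} transitions from 4 worker replicas")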

Detailed Description

Saved in:
Bibliographic Details
Main Authors: POPOV IVAYLO, VECERIK MATEJ, HEESS NICOLAS M O, LAMPE THOMAS, HAFNER ROLAND, RIEDMILLER MARTIN, LILLICRAP TIMOTHY P, BARTH-MARON GABRIEL
Format: Patent
Language: chi ; eng
Subjects:
Online Access: Order full text
container_end_page
container_issue
container_start_page
container_title
container_volume
creator POPOV IVAYLO
VECERIK MATEJ
HEESS NICOLAS M O
LAMPE THOMAS
HAFNER ROLAND
RIEDMILLER MARTIN
LILLICRAP TIMOTHY P
BARTH-MARON GABRIEL
description Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-efficient reinforcement learning. One of the systems is a system for training an actor neural network used to select actions to be performed by an agent that interacts with an environment by receiving observations characterizing states of the environment and, in response to each observation, performing an action selected from a continuous space of possible actions, wherein the actor neural network maps observations to next actions in accordance with values of parameters of the actor neural network, and wherein the system comprises: a plurality of workers, wherein each worker is configured to operate independently of each other worker, wherein each worker is associated with a respective agent replica that interacts with a respective replica of the environment during the training of the actor neural network.
format Patent
fulltext fulltext_linktorsrc
identifier
ispartof
issn
language chi ; eng
recordid cdi_epo_espacenet_CN110383298A
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
PHYSICS
title DATA-EFFICIENT REINFORCEMENT LEARNING FOR CONTINUOUS CONTROL TASKS
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-08T06%3A39%3A55IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=POPOV%20IVAYLO&rft.date=2019-10-25&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN110383298A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true