Control Systems and Reinforcement Learning

A high school student can create deep Q-learning code to control her robot, without any understanding of the meaning of 'deep' or 'Q', or why the code sometimes fails. This book is designed to explain the science behind reinforcement learning and optimal control in a way that is accessible to students with a background in calculus and matrix algebra. A unique focus is algorithm design to obtain the fastest possible speed of convergence for learning algorithms, along with insight into why reinforcement learning sometimes fails. Advanced stochastic process theory is avoided at the start by replacing random exploration with more intuitive deterministic probing for learning. Once these ideas are understood, it is not difficult to master techniques rooted in stochastic control. These topics are covered in the second part of the book, starting with Markov chain theory and ending with a fresh look at actor-critic methods for reinforcement learning.

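As a purely illustrative sketch of the kind of code the description alludes to (written for this record, not taken from the book), the lines below implement plain tabular Q-learning on a toy six-state chain; the environment, step size, discount factor, exploration rate, and episode count are all assumptions chosen for the example.

# Minimal tabular Q-learning sketch -- illustrative only, not code from the book.
# The toy "chain" environment and all hyperparameters are assumptions chosen for this example.
import random

N_STATES = 6           # states 0..5; reaching state 5 ends the episode with reward 1
ACTIONS = [0, 1]       # 0 = move left, 1 = move right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # step size, discount factor, exploration rate

def step(state, action):
    """Deterministic chain dynamics: the only reward is at the right end."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

def greedy(qvals):
    """Pick an action with maximal Q-value, breaking ties at random."""
    best = max(qvals)
    return random.choice([a for a, q in zip(ACTIONS, qvals) if q == best])

Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]   # Q[state][action]

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy exploration (the book contrasts this kind of random
        # exploration with deterministic probing).
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(Q[s])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward the one-step temporal-difference target.
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)]
print("Greedy action in states 0..4 (1 = move right):", policy)

With these settings the learned greedy policy should move right from every non-terminal state, which is optimal for this toy problem; the book's point is that writing such a loop is easy, while understanding when and how fast it converges is the real subject.
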
Bibliographic Details
Main author: Meyn, Sean
Format: Book
Language: English
Edition: 1
Published: Cambridge: Cambridge University Press, 2022-05-17
Pages: 454
ISBN: 1316511960; 9781316511961
eISBN: 1009051873; 9781009051873; 9781009063395; 1009063391
DOI: 10.1017/9781009051873
OCLC: 1311492751
Source: Cambridge Core All Books
Rights: Sean Meyn 2022
Subjects: Control theory; Reinforcement learning
Online access: Full text