Model-Free Reinforcement Learning by Embedding an Auxiliary System for Optimal Control of Nonlinear Systems
In this article, a novel integral reinforcement learning (IRL) algorithm is proposed to solve the optimal control problem for continuous-time nonlinear systems with unknown dynamics. The main challenging issue in learning is how to reject the oscillation caused by the externally added probing noise....
Saved in:
Published in: | IEEE Transactions on Neural Networks and Learning Systems, 2022-04, Vol.33 (4), p.1520-1534 |
---|---|
Main authors: | , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 1534 |
---|---|
container_issue | 4 |
container_start_page | 1520 |
container_title | IEEE Transactions on Neural Networks and Learning Systems |
container_volume | 33 |
creator | Xu, Zhenhui; Shen, Tielong; Cheng, Daizhan |
description | In this article, a novel integral reinforcement learning (IRL) algorithm is proposed to solve the optimal control problem for continuous-time nonlinear systems with unknown dynamics. The main challenge in learning is how to reject the oscillation caused by the externally added probing noise. This article addresses the issue by embedding an auxiliary trajectory that is designed as an exciting signal to learn the optimal solution. First, the auxiliary trajectory is used to decompose the state trajectory of the controlled system. Then, using the decoupled trajectories, a model-free policy iteration (PI) algorithm is developed, in which the policy evaluation step and the policy improvement step alternate until convergence to the optimal solution. An appropriate external input is introduced at the policy improvement step to eliminate the need for the input-to-state dynamics. Finally, the algorithm is implemented on an actor-critic structure: the output weights of the critic neural network (NN) and the actor NN are updated sequentially by least-squares methods. The convergence of the algorithm and the stability of the closed-loop system are guaranteed, and two examples demonstrate the effectiveness of the proposed algorithm. |
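The description above alternates a policy evaluation step with a policy improvement step until convergence. As a minimal sketch of that loop — not the paper's model-free, NN-based IRL method, but the classical model-based policy iteration it builds on — consider a scalar linear-quadratic problem, where both steps have closed forms. All constants (`a`, `b`, `q`, `r`) and the initial gain are illustrative assumptions:

```python
import math

# Scalar LQR: x_dot = a*x + b*u, cost J = integral of (q*x^2 + r*u^2) dt.
# (Illustrative constants; the paper treats general nonlinear dynamics.)
a, b, q, r = 1.0, 1.0, 1.0, 1.0

# Closed-form algebraic Riccati solution for reference:
# 2*a*p - (b**2 / r)*p**2 + q = 0, positive root.
p_star = r * (a + math.sqrt(a**2 + b**2 * q / r)) / b**2

k = 2.0  # initial stabilizing gain: a - b*k < 0
for _ in range(20):
    # Policy evaluation: solve the Lyapunov equation
    # 2*(a - b*k)*p + q + r*k**2 = 0 for the value coefficient p (V = p*x^2).
    p = (q + r * k**2) / (2 * (b * k - a))
    # Policy improvement: minimize the Hamiltonian, giving u = -(b/r)*p*x.
    k = b * p / r

print(round(p, 6), round(p_star, 6))  # both converge to 1 + sqrt(2) ≈ 2.414214
```

The paper replaces the model-based evaluation step with least-squares fits of critic and actor NN weights from measured trajectories, and uses the embedded auxiliary trajectory (rather than additive probing noise) to provide excitation.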
doi_str_mv | 10.1109/TNNLS.2020.3042589 |
format | Article |
pmid | 33347416 |
coden | ITNNAL |
eissn | 2162-2388 |
publisher | United States: IEEE |
orcidid | 0000-0001-5088-3209; 0000-0003-1378-6164; 0000-0002-2183-9978 |
tpages | 15 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2162-237X |
ispartof | IEEE Transactions on Neural Networks and Learning Systems, 2022-04, Vol.33 (4), p.1520-1534 |
issn | 2162-237X; 2162-2388 |
language | eng |
recordid | cdi_ieee_primary_9301237 |
source | IEEE Electronic Library (IEL) |
subjects | Algorithms; Approximate optimal control design; Artificial neural networks; auxiliary trajectory; completely model-free; Control systems; Convergence; Dynamical systems; Embedding; Feedback control; Heuristic algorithms; integral reinforcement learning (IRL); Iterative methods; Learning; Machine learning; Mathematical model; Neural networks; Nonlinear control; Nonlinear systems; Optimal control; Reinforcement; System dynamics; Trajectory; Trajectory control |
title | Model-Free Reinforcement Learning by Embedding an Auxiliary System for Optimal Control of Nonlinear Systems |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-21T04%3A42%3A46IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Model-Free%20Reinforcement%20Learning%20by%20Embedding%20an%20Auxiliary%20System%20for%20Optimal%20Control%20of%20Nonlinear%20Systems&rft.jtitle=IEEE%20transaction%20on%20neural%20networks%20and%20learning%20systems&rft.au=Xu,%20Zhenhui&rft.date=2022-04-01&rft.volume=33&rft.issue=4&rft.spage=1520&rft.epage=1534&rft.pages=1520-1534&rft.issn=2162-237X&rft.eissn=2162-2388&rft.coden=ITNNAL&rft_id=info:doi/10.1109/TNNLS.2020.3042589&rft_dat=%3Cproquest_RIE%3E2647426887%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2647426887&rft_id=info:pmid/33347416&rft_ieee_id=9301237&rfr_iscdi=true |