Recurrent Neural Network-based Internal Model Control design for stable nonlinear systems
Owing to their superior modeling capabilities, gated Recurrent Neural Networks, such as Gated Recurrent Units (GRUs) and Long Short-Term Memory networks (LSTMs), have become popular tools for learning dynamical systems. This paper aims to discuss how these networks can be adopted for the synthesis of Internal Model Control (IMC) architectures. To this end, a gated recurrent network is first used to learn a model of the unknown input-output stable plant. Then, a controller gated recurrent network is trained to approximate the model inverse. The stability of these networks, ensured by means of a suitable training procedure, makes it possible to guarantee input-output closed-loop stability. The proposed scheme is able to cope with the saturation of the control variables and can be deployed on low-power embedded controllers, as it requires limited online computations. The approach is then tested on the Quadruple Tank benchmark system and compared to alternative control laws, resulting in remarkable closed-loop performance.
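As a rough illustration of the two training steps described in the abstract, the following PyTorch sketch first fits a GRU to input-output data from a plant and then trains a second GRU to approximate the inverse of that learned model. It is not the authors' code: the network sizes, the toy data, the actuator limits, and the plain mean-squared-error losses are illustrative assumptions, and the stability-enforcing conditions imposed during training in the paper are not reproduced here.

```python
import torch
import torch.nn as nn

class GRUNet(nn.Module):
    """Single-layer GRU followed by a linear output map."""
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.gru = nn.GRU(n_in, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, u):                 # u: (batch, time, n_in)
        h, _ = self.gru(u)
        return self.out(h)                # (batch, time, n_out)

torch.manual_seed(0)

# --- Step 1: learn a model of the (input-output stable) plant ---------------
# u_data / y_data stand in for recorded excitation and response sequences.
u_data = torch.randn(32, 200, 2)
y_data = torch.tanh(torch.cumsum(0.05 * u_data, dim=1))   # toy "plant" response

model = GRUNet(n_in=2, n_hidden=16, n_out=2)
opt_m = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(300):
    opt_m.zero_grad()
    loss = nn.functional.mse_loss(model(u_data), y_data)
    loss.backward()
    opt_m.step()

# --- Step 2: train a controller network to approximate the model inverse ----
# The controller maps a reference sequence to a control sequence such that the
# frozen plant model, driven by that control, tracks the reference.
for p in model.parameters():
    p.requires_grad_(False)

controller = GRUNet(n_in=2, n_hidden=16, n_out=2)
opt_c = torch.optim.Adam(controller.parameters(), lr=1e-3)
u_min, u_max = -1.0, 1.0                  # assumed actuator limits

for _ in range(300):
    opt_c.zero_grad()
    r = 0.05 * torch.randn(32, 200, 2).cumsum(dim=1)       # reference signals
    u = torch.clamp(controller(r), u_min, u_max)            # saturated control
    inv_loss = nn.functional.mse_loss(model(u), r)
    inv_loss.backward()
    opt_c.step()

print("model fit MSE:", nn.functional.mse_loss(model(u_data), y_data).item())
```

In the deployed IMC loop described by the paper, the controller network would be driven by the reference corrected by the mismatch between the measured plant output and the model output; the clamp on the control sequence above is one way to mirror how the scheme accounts for input saturation.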
Published in: | arXiv.org 2022-03 |
---|---|
Main authors: | Bonassi, Fabio; Scattolini, Riccardo |
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning; Computer Science - Systems and Control; Neural networks; Nonlinear control; Nonlinear systems; Recurrent neural networks |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Bonassi, Fabio; Scattolini, Riccardo |
description | Owing to their superior modeling capabilities, gated Recurrent Neural Networks, such as Gated Recurrent Units (GRUs) and Long Short-Term Memory networks (LSTMs), have become popular tools for learning dynamical systems. This paper aims to discuss how these networks can be adopted for the synthesis of Internal Model Control (IMC) architectures. To this end, a gated recurrent network is first used to learn a model of the unknown input-output stable plant. Then, a controller gated recurrent network is trained to approximate the model inverse. The stability of these networks, ensured by means of a suitable training procedure, makes it possible to guarantee input-output closed-loop stability. The proposed scheme is able to cope with the saturation of the control variables and can be deployed on low-power embedded controllers, as it requires limited online computations. The approach is then tested on the Quadruple Tank benchmark system and compared to alternative control laws, resulting in remarkable closed-loop performance. |
doi_str_mv | 10.48550/arxiv.2108.04585 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2108_04585 |
source | arXiv.org; Free E-Journals |
subjects | Computer Science - Learning; Computer Science - Systems and Control; Neural networks; Nonlinear control; Nonlinear systems; Recurrent neural networks |
title | Recurrent Neural Network-based Internal Model Control design for stable nonlinear systems |