Learning safety in model-based Reinforcement Learning using MPC and Gaussian Processes
We propose a method to encourage safety in Model Predictive Control (MPC)-based Reinforcement Learning (RL) via Gaussian Process (GP) regression. This framework consists of 1) a parametric MPC scheme that is employed as a model-based controller with approximate knowledge of the real system's dynamics, 2) an episodic RL algorithm tasked with adjusting the MPC parametrization in order to increase its performance, and 3) GP regressors used to estimate, directly from data, constraints on the MPC parameters capable of predicting, up to some probability, whether a parametrization is likely to yield a safe or unsafe policy. These constraints are then enforced on the RL updates in an effort to enhance the learning method with a probabilistic safety mechanism. Compared to other recent publications combining safe RL with MPC, our method does not require further assumptions on, e.g., the prediction model in order to retain computational tractability. We illustrate the results of our method in a numerical example on the control of a quadrotor drone in a safety-critical environment.
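To make the third ingredient concrete, the sketch below shows one way a GP fitted to past episode outcomes can gate RL updates of the MPC parameters. This is a minimal illustration under assumed details, not the paper's implementation: the scalar safety margin, the confidence multiplier `BETA`, the helper `is_probably_safe`, and the toy data are all hypothetical.

```python
# Hedged sketch: GP-based probabilistic safety check on MPC-parameter updates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Past episodes: tried MPC parametrizations theta and an observed safety
# margin per episode (<= 0 means the episode respected all constraints).
rng = np.random.default_rng(0)
thetas = rng.uniform(-1.0, 1.0, size=(30, 2))   # toy parametrizations
margins = (thetas**2).sum(axis=1) - 0.8         # toy safety margins

# GP regressor estimating the safety margin as a function of theta.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(thetas, margins)

BETA = 2.0  # roughly 97.7% one-sided confidence for a Gaussian posterior

def is_probably_safe(theta: np.ndarray) -> bool:
    """Chance constraint on the GP posterior: mean + BETA * std <= 0."""
    mu, std = gp.predict(theta.reshape(1, -1), return_std=True)
    return float(mu[0] + BETA * std[0]) <= 0.0

# Candidate RL update, applied only if predicted safe with high probability.
theta = np.array([0.3, -0.2])
candidate = theta - 0.1 * rng.normal(size=2)    # stand-in for an RL step
if is_probably_safe(candidate):
    theta = candidate
```

In the paper the learned constraints are enforced on the RL update itself; the accept/reject step above is only the simplest stand-in for such a mechanism.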
Saved in:
Published in: | arXiv.org 2023-03 |
---|---|
Main authors: | Airaldi, Filippo; De Schutter, Bart; Dabiri, Azita |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Computer Science - Systems and Control; Gaussian process; Learning; Parameterization; Prediction models; Predictive control; Safety critical; Statistical analysis |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Airaldi, Filippo; De Schutter, Bart; Dabiri, Azita |
doi_str_mv | 10.48550/arxiv.2211.01860 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2211_01860 |
source | arXiv.org; Free E-Journals |
subjects | Algorithms; Computer Science - Systems and Control; Gaussian process; Learning; Parameterization; Prediction models; Predictive control; Safety critical; Statistical analysis |
title | Learning safety in model-based Reinforcement Learning using MPC and Gaussian Processes |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T18%3A49%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20safety%20in%20model-based%20Reinforcement%20Learning%20using%20MPC%20and%20Gaussian%20Processes&rft.jtitle=arXiv.org&rft.au=Airaldi,%20Filippo&rft.date=2023-03-17&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2211.01860&rft_dat=%3Cproquest_arxiv%3E2731910278%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2731910278&rft_id=info:pmid/&rfr_iscdi=true |