Proximal Policy Optimization for Tracking Control Exploiting Future Reference Information

In recent years, reinforcement learning (RL) has gained increasing attention in control engineering; in particular, policy gradient methods are widely used. In this work, we improve the tracking performance of proximal policy optimization (PPO) for arbitrary reference signals by incorporating information about future reference values. Two variants of extending the argument of the actor and the critic with future reference values are presented. In the first variant, global future reference values are added to the argument. In the second variant, a novel kind of residual space with future reference values, applicable to model-free reinforcement learning, is introduced. Our approach is evaluated against a PI controller on a simple drive-train model. We expect our method to generalize to arbitrary references better than previous approaches, pointing towards the applicability of RL to the control of real systems.
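
The following sketch (Python with NumPy, written for this record rather than taken from the paper) illustrates how such an extended actor/critic input could be assembled. The function name, the horizon parameter, and the assumption that the first state component is the tracked output are illustrative choices, and the "residual" branch is only one plausible reading of the residual space mentioned in the abstract.

    import numpy as np

    def extended_observation(state, reference, k, horizon, variant="global"):
        """Append future reference values r_k ... r_{k+N} to the plant state.

        Illustrative sketch, not the authors' implementation."""
        # Future reference window, clipped so the index never runs past the trajectory.
        idx = np.clip(np.arange(k, k + horizon + 1), 0, len(reference) - 1)
        future_refs = reference[idx]

        if variant == "global":
            # Variant 1: add the global (absolute) future reference values to the argument.
            extra = future_refs
        elif variant == "residual":
            # Variant 2 (one plausible reading): express the future references relative to
            # the currently tracked output, assumed here to be the first state component.
            extra = future_refs - state[0]
        else:
            raise ValueError(f"unknown variant: {variant}")

        return np.concatenate([state, extra])

    # Hypothetical usage with a drive-train-like state [shaft speed, torsion angle]
    # tracking a speed ramp; the PPO actor and critic would both receive `obs`.
    reference = np.linspace(0.0, 1.0, 200)
    state = np.array([0.1, 0.0])
    obs = extended_observation(state, reference, k=10, horizon=5, variant="residual")

In a PPO setup, this extended vector would simply replace the plain plant state as the observation passed to both the actor and the critic, while the reward can still be based only on the current tracking error.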

Bibliographic Details
Main authors: Mayer, Jana; Westermann, Johannes; Muriedas, Juan Pedro Gutiérrez H; Mettin, Uwe; Lampe, Alexander
Format: Article
Language: English
Subjects: Computer Science - Learning; Computer Science - Robotics; Computer Science - Systems and Control
Online access: Order full text
DOI: 10.48550/arxiv.2107.09647
Published: 2021-07-20
Source: arXiv.org
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-25T20%3A55%3A12IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Proximal%20Policy%20Optimization%20for%20Tracking%20Control%20Exploiting%20Future%20Reference%20Information&rft.au=Mayer,%20Jana&rft.date=2021-07-20&rft_id=info:doi/10.48550/arxiv.2107.09647&rft_dat=%3Carxiv_GOX%3E2107_09647%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true