A Novel Framework Combining MPC and Deep Reinforcement Learning With Application to Freeway Traffic Control

Model predictive control (MPC) and deep reinforcement learning (DRL) have been developed extensively as two independent techniques for traffic management. Although the features of MPC and DRL complement each other very well, few of the current studies consider combining these two methods for application in the field of freeway traffic control. This paper proposes a novel framework for integrating MPC and DRL methods for freeway traffic control that is different from existing MPC-(D)RL methods. Specifically, the proposed framework adopts a hierarchical structure, where a high-level efficient MPC component works at a low frequency to provide a baseline control input, while the DRL component works at a high frequency to modify online the output generated by MPC. The control framework, therefore, needs only limited online computational resources and is able to handle uncertainties and external disturbances after proper learning with enough training data. The proposed framework is implemented on a benchmark freeway network in order to coordinate ramp metering and variable speed limits, and the performance is compared with standard MPC and DRL approaches. The simulation results show that the proposed framework outperforms standalone MPC and DRL methods in terms of total time spent (TTS) and constraint satisfaction, despite model uncertainties and external disturbances.
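
As a rough illustration of the hierarchical structure described in the abstract, the sketch below shows how a low-frequency MPC baseline and a high-frequency DRL correction could be combined in one control loop. It is not the authors' implementation; solve_mpc, drl_policy, simulate_step, the dummy state vector, the correction bound, and the re-planning period M are all hypothetical placeholders.

    import numpy as np

    # Minimal sketch of the MPC + DRL hierarchy (placeholder functions, not the paper's code).

    def solve_mpc(state):
        # Placeholder: solve the low-frequency MPC problem and return a baseline
        # control input, e.g. [ramp-metering rate, variable speed limit in km/h].
        return np.array([0.5, 100.0])

    def drl_policy(state, baseline):
        # Placeholder: DRL actor returning a bounded correction to the MPC baseline.
        correction = np.zeros_like(baseline)
        return np.clip(correction, -0.1 * np.abs(baseline), 0.1 * np.abs(baseline))

    def simulate_step(state, control):
        # Placeholder traffic-model update (e.g. a METANET-like prediction step).
        return state

    M = 6                    # MPC is re-solved only every M control steps (low frequency)
    state = np.zeros(4)      # dummy traffic state (densities, speeds, queue lengths)
    baseline = solve_mpc(state)

    for k in range(60):      # high-frequency control loop
        if k % M == 0:
            baseline = solve_mpc(state)                    # low-frequency baseline from MPC
        control = baseline + drl_policy(state, baseline)   # DRL modifies the MPC output online
        state = simulate_step(state, control)

Because the MPC optimization is solved only once every M steps, the online computational load stays low, while the learned correction can react to uncertainties and disturbances at every step; this mirrors the division of labor the abstract describes.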

Bibliographic Details
Published in: IEEE Transactions on Intelligent Transportation Systems, 2024-07, Vol. 25 (7), p. 6756-6769
Main Authors: Sun, Dingshan; Jamshidnejad, Anahita; De Schutter, Bart
Format: Article
Language: English
Subjects: deep reinforcement learning; Freeway network management; hierarchical structure; Mathematical models; model predictive control; Optimization; Reinforcement learning; Safety; Traffic control; Uncertainty
Online Access: Full text (https://ieeexplore.ieee.org/document/10379486)
DOI: 10.1109/TITS.2023.3342651
ISSN: 1524-9050
EISSN: 1558-0016
Source: IEEE Electronic Library (IEL)