Deep Reinforcement Learning-Based Traffic Light Scheduling Framework for SDN-Enabled Smart Transportation System
Published in: IEEE Transactions on Intelligent Transportation Systems, March 2022, Vol. 23, No. 3, pp. 2411-2421
Main authors:
Format: Article
Language: English
Keywords:
Online access: Order full text
Abstract: This work proposes a traffic-light scheduling framework that uses deep reinforcement learning to balance traffic flow and prevent congestion in dense regions of a city via a software-defined control interface. A software-defined, control-enabled architecture is proposed to monitor traffic conditions and generate the traffic-light control signal (red/yellow/green) accordingly. For intelligent traffic-light control, a Deep Reinforcement Learning (DRL) model is proposed that takes vehicular dynamics from the real-time traffic environment as inputs, such as heterogeneous vehicle counts, speeds, and traffic density. To detect congestion, a threshold policy is proposed and deployed on the control server, which generates a congestion prevention signal. A DRL agent operates in coordination with the congestion prevention signal and generates an effective traffic-light control signal. The proposed model is evaluated through a realistic simulation of an Indian city imported from OpenStreetMap, using the well-known open-source simulator SUMO. Comparative results show that the proposed solution improves several performance metrics, namely average waiting time, throughput, average queue length, and average speed, by 28.34% - 66.62%, 24.76% - 66.60%, 30.89% - 69.80%, and 16.62% - 43.67%, respectively, over other state-of-the-art approaches.
ISSN: 1524-9050, 1558-0016
DOI: 10.1109/TITS.2021.3095161
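The abstract describes two coordinated mechanisms: a threshold policy on the control server that raises a congestion prevention signal, and a DRL agent that selects traffic-light phases from real-time vehicular dynamics (vehicle counts, speeds, densities). The sketch below is a minimal, self-contained illustration of that control loop, not the authors' implementation; the phase set, density threshold, reward, approach-to-phase mapping, and the tiny linear Q-function are all illustrative assumptions standing in for details the abstract does not specify.

```python
# Minimal sketch (not the paper's code) of a threshold-based congestion signal
# coordinated with a DRL-style traffic-light agent. All names and numbers are
# illustrative assumptions.
import random
import numpy as np

PHASES = ["NS_GREEN", "EW_GREEN"]   # hypothetical action set: which phase gets green
DENSITY_THRESHOLD = 0.6             # assumed normalized-density congestion threshold

def observe_approach(rng):
    """Stand-in for real-time sensing of one approach: count, mean speed, density."""
    count = rng.randint(0, 40)
    speed = rng.uniform(2.0, 15.0)          # m/s
    density = count / 40.0                  # normalized occupancy proxy
    return np.array([count, speed, density], dtype=np.float32)

def congestion_signal(state):
    """Threshold policy on the control server: flag any approach whose
    normalized density exceeds the threshold."""
    densities = state.reshape(-1, 3)[:, 2]
    return densities > DENSITY_THRESHOLD    # one boolean flag per approach

class TinyQAgent:
    """Toy linear Q-function with epsilon-greedy selection; a real DRL agent
    (e.g. a trained deep Q-network) would replace this."""
    def __init__(self, state_dim, n_actions, eps=0.1, lr=0.01, gamma=0.95):
        self.w = np.zeros((n_actions, state_dim))
        self.eps, self.lr, self.gamma = eps, lr, gamma

    def act(self, state, congested):
        if random.random() < self.eps:
            return random.randrange(len(PHASES))
        if congested.any():
            # Coordinate with the congestion prevention signal: serve the most
            # congested approach (assumption: approach i maps to phase i % 2).
            return int(np.argmax(congested)) % len(PHASES)
        return int(np.argmax(self.w @ state))

    def update(self, s, a, r, s_next):
        # One-step temporal-difference update of the linear Q-weights.
        target = r + self.gamma * np.max(self.w @ s_next)
        td_error = target - self.w[a] @ s
        self.w[a] += self.lr * td_error * s

if __name__ == "__main__":
    rng = random.Random(0)
    n_approaches = 4
    agent = TinyQAgent(state_dim=3 * n_approaches, n_actions=len(PHASES))
    state = np.concatenate([observe_approach(rng) for _ in range(n_approaches)])
    for step in range(5):
        flags = congestion_signal(state)
        action = agent.act(state, flags)
        next_state = np.concatenate([observe_approach(rng) for _ in range(n_approaches)])
        # Reward sketch: negative total vehicle count, pushing the agent to
        # keep queues (and hence waiting time) small.
        reward = -float(next_state.reshape(-1, 3)[:, 0].sum())
        agent.update(state, action, reward, next_state)
        print(f"step {step}: congestion={flags.tolist()} -> phase {PHASES[action]}")
        state = next_state
```

In the paper the evaluation runs in SUMO on an OpenStreetMap network of an Indian city; in such a setup the random `observe_approach` stub would be replaced by measurements pulled from the simulator (or, in deployment, from the software-defined monitoring interface), and the chosen phase would be written back to the simulated traffic lights.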