Multi-bus signal priority control method based on deep reinforcement learning in edge computing environment
Format: Patent
Language: Chinese; English
Abstract: The invention discloses a multi-bus signal priority control method based on deep reinforcement learning in an edge computing environment. The urban road network is partitioned with a genetic-algorithm-based spectral clustering algorithm (GASC), and the same multi-bus signal priority control model is deployed in each subarea, achieving region-wide bus signal priority control. A distributed dueling deep Q-network algorithm with prioritized experience replay (DPDDQN) is used to train the control model, so that it can handle multiple simultaneous bus priority requests while reducing the delay and congestion of other road traffic and maximizing overall vehicle throughput.
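The record does not disclose the GASC procedure itself. The sketch below only illustrates the general idea: spectral clustering on an intersection affinity matrix, with a simple evolutionary-style search over the number of subareas. The function name partition_road_network, the silhouette-based fitness, and all hyperparameters are hypothetical choices for illustration, not the patented algorithm.

```python
# Hypothetical sketch: partition a road-network affinity matrix into subareas
# with spectral clustering, choosing the number of subareas via a toy
# genetic-algorithm-style search (assumed design, not the patent's GASC).
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import silhouette_score

def partition_road_network(affinity, k_min=2, k_max=8,
                           generations=10, pop_size=6, seed=0):
    """Return intersection-to-subarea labels for a symmetric affinity matrix."""
    rng = np.random.default_rng(seed)
    distance = 1.0 - affinity            # silhouette expects a dissimilarity matrix
    np.fill_diagonal(distance, 0.0)

    def fitness(k):
        labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                    random_state=seed).fit_predict(affinity)
        return silhouette_score(distance, labels, metric="precomputed"), labels

    # Population of candidate subarea counts; mutate by +/-1, keep the fittest.
    population = rng.integers(k_min, k_max + 1, size=pop_size)
    best_score, best_labels = -np.inf, None
    for _ in range(generations):
        scored = [fitness(int(k)) for k in population]
        for score, labels in scored:
            if score > best_score:
                best_score, best_labels = score, labels
        order = np.argsort([-score for score, _ in scored])
        parents = population[order[: pop_size // 2]]          # selection
        children = np.clip(parents + rng.integers(-1, 2, size=parents.size),
                           k_min, k_max)                      # mutation
        population = np.concatenate([parents, children])
    return best_labels

# Example: a toy 6-intersection network made of two densely connected triangles.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(partition_road_network(A, k_min=2, k_max=3))
```

On this toy matrix the two densely connected triangles should come out as separate subareas, each of which would then run its own copy of the priority control model.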
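The two ingredients the abstract names for DPDDQN, a dueling Q-network head and prioritized experience replay, are standard components and can be sketched as follows. This is a generic Python/PyTorch illustration, not the patent's disclosed architecture: the class names, state and action dimensions, and the flat list-based replay buffer (a sum-tree is more common at scale) are assumptions, and the distributed, per-subarea deployment on edge nodes is only indicated in comments.

```python
# Minimal sketch of a dueling Q-network head plus proportional prioritized
# experience replay (assumed, generic components; not the patented DPDDQN).
import numpy as np
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # per-phase advantages A(s, a)

    def forward(self, x):
        h = self.feature(x)
        a = self.advantage(h)
        return self.value(h) + a - a.mean(dim=1, keepdim=True)

class PrioritizedReplay:
    """Proportional prioritized replay using flat Python lists."""
    def __init__(self, capacity=10_000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def push(self, transition, td_error=1.0):
        if len(self.data) >= self.capacity:            # drop the oldest transition
            self.data.pop(0); self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + 1e-5) ** self.alpha)

    def sample(self, batch_size):
        p = np.asarray(self.priorities); p = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        return [self.data[i] for i in idx], idx

    def update(self, idx, td_errors):                  # refresh priorities after a step
        for i, e in zip(idx, td_errors):
            self.priorities[i] = (abs(float(e)) + 1e-5) ** self.alpha

# Assumed layout: each subarea's edge node holds its own online/target pair.
state_dim, n_actions = 12, 4   # e.g. queue lengths + bus-request flags; signal phases
online, target = DuelingQNet(state_dim, n_actions), DuelingQNet(state_dim, n_actions)
target.load_state_dict(online.state_dict())
buffer = PrioritizedReplay()
```

During training one would sample a batch from the buffer, compute TD errors against the target network, update the online network, and feed the errors back through update() so that high-error transitions, such as those recorded when several buses request priority at once, are replayed more often.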