Tactical Decision-Making in Autonomous Driving by Reinforcement Learning with Uncertainty Estimation
IEEE Intelligent Vehicles Symposium (IV), 2020, pp. 1292-1298
Saved in:
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Reinforcement learning (RL) can be used to create a tactical decision-making agent for autonomous driving. However, previous approaches only output decisions and do not provide information about the agent's confidence in the recommended actions. This paper investigates how a Bayesian RL technique, based on an ensemble of neural networks with additional randomized prior functions (RPF), can be used to estimate the uncertainty of decisions in autonomous driving. A method for classifying whether or not an action should be considered safe is also introduced. The performance of the ensemble RPF method is evaluated by training an agent on a highway driving scenario. It is shown that the trained agent can estimate the uncertainty of its decisions and indicate an unacceptable level when it faces a situation that is far from the training distribution. Furthermore, within the training distribution, the ensemble RPF agent outperforms a standard Deep Q-Network agent. In this study, the estimated uncertainty is used to choose safe actions in unknown situations, but the uncertainty information could also be used to identify situations that should be added to the training process.
DOI: 10.48550/arxiv.2004.10439
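To make the abstract's core idea concrete, the sketch below illustrates the general shape of an ensemble with randomized prior functions: each ensemble member combines a trainable network with a fixed, randomly initialized prior network, and the disagreement (variance) of the members' Q-value estimates serves as an uncertainty proxy, which can be thresholded to flag an action as safe or not. This is a minimal toy illustration, not the paper's implementation: the "networks" are simple linear maps, and the names (`make_net`, `act_with_uncertainty`), the prior scale `BETA`, and the variance threshold are all hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 5          # ensemble size (illustrative)
STATE_DIM = 4  # toy state dimension
N_ACTIONS = 3  # toy discrete action space
BETA = 1.0     # prior scale factor (hypothetical value)

def make_net():
    """A toy linear 'network': maps a state to Q-values for each action."""
    return rng.normal(scale=0.1, size=(STATE_DIM, N_ACTIONS))

# Each ensemble member k has a trainable part f_k and a fixed random prior p_k.
# In training, only f_k would be updated; the priors stay frozen.
trainable = [make_net() for _ in range(K)]
priors = [make_net() for _ in range(K)]

def q_values(state):
    """Per-member Q estimates: Q_k(s, .) = f_k(s) + BETA * p_k(s)."""
    return np.stack([state @ f + BETA * (state @ p)
                     for f, p in zip(trainable, priors)])  # shape (K, N_ACTIONS)

def act_with_uncertainty(state, var_threshold=0.5):
    """Pick the action by ensemble-mean Q; report its variance as uncertainty."""
    q = q_values(state)
    mean_q = q.mean(axis=0)   # ensemble mean drives action selection
    var_q = q.var(axis=0)     # member disagreement = epistemic-uncertainty proxy
    action = int(mean_q.argmax())
    safe = bool(var_q[action] < var_threshold)
    return action, float(var_q[action]), safe
```

Far from the training distribution, the untrained priors dominate and the members disagree, so the variance rises and the action is flagged as unsafe, which is the behavior the abstract describes at a high level.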