Approximative Uncertainty in Neural Network Predictions
Author:
Format: Dissertation
Language: English
Online Access: Order full text
Abstract: Suppose data-driven black-box models, e.g., neural networks, are to be used as components in safety-critical systems such as autonomous vehicles. In that case, knowing how uncertain they are in their predictions is crucial. However, standard formulations of neural networks do not provide this. Hence, this thesis aims to develop a method that can, out of the box, extend the standard formulations to include uncertainty in the prediction. The proposed method is based on a local linear approximation, using a two-step linearization to quantify the uncertainty in the prediction from the neural network. First, the posterior distribution of the neural network parameters is approximated by a Gaussian distribution, whose mean is at the maximum a posteriori (MAP) estimate of the parameters and whose covariance is estimated from the shape of the likelihood function in the vicinity of the estimated parameters. Second, the uncertainty in the parameters is propagated to uncertainty in the model's output by linearizing the nonlinear model that the neural network constitutes.
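To make the two-step linearization concrete, the sketch below illustrates the idea on a toy regression network; the architecture, the numerical Jacobian, and the values of the noise variance `sigma2` and prior precision `prior_prec` are illustrative assumptions, not the thesis's implementation. Step 1 forms a Gauss-Newton/Laplace-style covariance from the likelihood curvature at the MAP estimate; step 2 applies the delta method, Var[f(x)] ≈ J(x) P J(x)ᵀ.

```python
import numpy as np

def predict(theta, x):
    """Toy one-hidden-layer regression net: f(x) = w2 . tanh(w1 * x + b1)."""
    w1, b1, w2 = theta[:4], theta[4:8], theta[8:12]
    return w2 @ np.tanh(w1 * x + b1)

def jacobian(f, theta, eps=1e-6):
    """Forward-difference Jacobian of a scalar-valued f w.r.t. theta."""
    f0, J = f(theta), np.zeros(theta.size)
    for i in range(theta.size):
        t = theta.copy()
        t[i] += eps
        J[i] = (f(t) - f0) / eps
    return J

rng = np.random.default_rng(0)
theta_map = rng.normal(size=12)      # stand-in for a trained (MAP) parameter vector
X = rng.uniform(-2.0, 2.0, size=50)  # assumed training inputs
sigma2, prior_prec = 0.1, 1.0        # assumed noise variance and prior precision

# Step 1 (Laplace/Gauss-Newton): covariance from the likelihood curvature
# at the MAP estimate, H = prior_prec * I + (1/sigma2) * sum_n J_n J_n^T.
H = prior_prec * np.eye(theta_map.size)
for x in X:
    J_n = jacobian(lambda t: predict(t, x), theta_map)
    H += np.outer(J_n, J_n) / sigma2
P = np.linalg.inv(H)                 # approximate posterior covariance

# Step 2 (delta method): linearize the network around theta_map and
# propagate the parameter uncertainty to the output.
x_test = 0.5
J = jacobian(lambda t: predict(t, x_test), theta_map)
mean = predict(theta_map, x_test)
var = J @ P @ J + sigma2             # predictive variance, incl. noise
print(f"prediction: {mean:.3f} +/- {np.sqrt(var):.3f}")
```

The outer-product (Gauss-Newton) form of the curvature is a common stand-in for the full Hessian here, since it is cheap to accumulate per sample and guaranteed positive semidefinite.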
The first part of the thesis considers regression problems, with examples from road-friction experiments using simulated and experimentally collected data. For the model-order selection problem, it is shown that the method does not underestimate the uncertainty in the predictions of overparametrized models.
The second part of the thesis considers classification problems. It treats the concept of calibration of the uncertainty, i.e., how reliable the uncertainty is and how closely it resembles the true uncertainty. The proposed method is shown to produce calibrated estimates of the uncertainty, evaluated on classical image data sets. From a computational perspective, the thesis proposes a recursive update of the parameter covariance, enhancing the method's viability. Furthermore, it shows how quantified uncertainty can improve the robustness of a decision process by formulating an information fusion scheme that includes both temporal correlation and correlation between classifiers. Moreover, having access to a measure of uncertainty in the prediction is essential for detecting outliers in the data, i.e., examples that the neural network did not see during training; on this task, the proposed method shows promising results. Finally, the thesis proposes an extension that enables a multimodal representation of the uncertainty.
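As a rough sketch of how such a recursive covariance update can be realized (the thesis's exact scheme may differ), a rank-one Sherman-Morrison update folds one per-sample Jacobian `J_n` into the covariance at a time, reusing the hypothetical quantities from the sketch above:

```python
import numpy as np

def covariance_update(P, J_n, sigma2):
    """Sherman-Morrison rank-one update: returns
    inv(inv(P) + outer(J_n, J_n) / sigma2) without re-inverting
    the full precision matrix."""
    PJ = P @ J_n
    return P - np.outer(PJ, PJ) / (sigma2 + J_n @ PJ)

# Stream per-sample Jacobians into the covariance, starting from the prior.
rng = np.random.default_rng(1)
P = np.eye(12)                 # prior covariance (unit prior precision)
for _ in range(50):
    J_n = rng.normal(size=12)  # stand-in for a real per-sample Jacobian
    P = covariance_update(P, J_n, sigma2=0.1)
```

Each update costs O(d^2) rather than the O(d^3) of re-inverting the full curvature matrix, which is what makes a recursive formulation attractive when data arrive sequentially.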
DOI: 10.3384/9789180754064