Evaluation of Multi-task Uncertainties in Joint Semantic Segmentation and Monocular Depth Estimation
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: While a number of promising uncertainty quantification methods have been proposed to address the prevailing shortcomings of deep neural networks, such as overconfidence and lack of explainability, quantifying predictive uncertainties in the context of joint semantic segmentation and monocular depth estimation has not yet been explored. Since many real-world applications are multi-modal in nature and hence stand to benefit from multi-task learning, this is a substantial gap in the current literature. To this end, we conduct a comprehensive series of experiments to study how multi-task learning influences the quality of uncertainty estimates in comparison to solving both tasks separately.
DOI: 10.48550/arxiv.2405.17097
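
The abstract does not detail a concrete model or uncertainty method, so the snippet below is only an illustrative sketch of one common setup for per-task predictive uncertainty in a joint segmentation-and-depth network: a hard-parameter-sharing encoder, two task-specific heads, and Monte Carlo dropout. The architecture, layer sizes, class count, and dropout rate are assumptions for illustration, not the configuration evaluated in the paper.

```python
# Illustrative sketch (not the paper's method): joint semantic segmentation and
# monocular depth estimation with a shared encoder and MC-dropout uncertainty.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointSegDepthNet(nn.Module):
    def __init__(self, num_classes: int = 19):
        super().__init__()
        # Shared convolutional encoder (hard parameter sharing across tasks).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.2),  # kept stochastic at test time for MC dropout
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.2),
        )
        # Task-specific heads.
        self.seg_head = nn.Conv2d(64, num_classes, 1)  # per-pixel class logits
        self.depth_head = nn.Conv2d(64, 1, 1)          # per-pixel depth

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.depth_head(feats)


@torch.no_grad()
def mc_dropout_predict(model, x, n_samples: int = 20):
    """Run several stochastic forward passes and summarize per-task uncertainty."""
    model.train()  # keep dropout layers active during inference
    seg_probs, depths = [], []
    for _ in range(n_samples):
        seg_logits, depth = model(x)
        seg_probs.append(F.softmax(seg_logits, dim=1))
        depths.append(depth)
    seg_mean = torch.stack(seg_probs).mean(dim=0)
    depth_stack = torch.stack(depths)
    # Segmentation uncertainty: entropy of the mean class distribution.
    seg_entropy = -(seg_mean * seg_mean.clamp_min(1e-12).log()).sum(dim=1)
    # Depth uncertainty: per-pixel variance across the MC samples.
    depth_var = depth_stack.var(dim=0)
    return seg_mean, seg_entropy, depth_stack.mean(dim=0), depth_var


if __name__ == "__main__":
    model = JointSegDepthNet()
    image = torch.randn(1, 3, 64, 64)  # dummy input
    _, seg_unc, _, depth_unc = mc_dropout_predict(model, image)
    print(seg_unc.shape, depth_unc.shape)
```

In this sketch, segmentation uncertainty is summarized as the entropy of the mean softmax over the MC samples, and depth uncertainty as the per-pixel variance of the sampled depth predictions; the single-task baselines the abstract compares against would use the same heads attached to separate encoders.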