DUE: Dynamic Uncertainty-Aware Explanation Supervision via 3D Imputation
Format: Article
Language: English
Abstract: Explanation supervision aims to enhance deep learning models by integrating additional signals to guide the generation of model explanations, showcasing notable improvements in both the predictability and explainability of the model. However, the application of explanation supervision to higher-dimensional data, such as 3D medical images, remains an under-explored domain. Challenges associated with supervising visual explanations in the presence of an additional dimension include: 1) changes in spatial correlation, 2) a lack of direct 3D annotations, and 3) uncertainty that varies across different parts of the explanation. To address these challenges, we propose a Dynamic Uncertainty-aware Explanation supervision (DUE) framework for 3D explanation supervision that ensures uncertainty-aware explanation guidance when dealing with sparsely annotated 3D data, using diffusion-based 3D interpolation. Our proposed framework is validated through comprehensive experiments on diverse real-world medical imaging datasets. The results demonstrate the effectiveness of our framework in enhancing the predictability and explainability of deep learning models in the context of medical imaging diagnosis applications.
DOI: 10.48550/arxiv.2403.10831
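To give a concrete picture of what uncertainty-aware explanation supervision can look like, the sketch below combines a standard prediction loss with an explanation loss that is down-weighted at voxels where the imputed 3D annotation is uncertain. All names, tensor shapes, and the specific `1 - uncertainty` weighting are illustrative assumptions, not the paper's exact formulation; consult the paper at the DOI above for the actual DUE method.

```python
# A minimal sketch of uncertainty-weighted explanation supervision in PyTorch.
# Everything here (function name, shapes, the 1 - uncertainty weighting) is an
# illustrative assumption, not the DUE paper's exact method.
import torch
import torch.nn.functional as F


def explanation_supervised_loss(logits, labels, saliency, annotation,
                                uncertainty, lambda_expl=0.5):
    """Prediction loss plus an explanation loss that is down-weighted
    where the imputed 3D annotation is uncertain.

    logits:      (B, C) class scores from the model
    labels:      (B,)   ground-truth class indices
    saliency:    (B, D, H, W) model explanation, e.g. a 3D saliency map
    annotation:  (B, D, H, W) annotation mask with sparse slices imputed
    uncertainty: (B, D, H, W) per-voxel uncertainty of the imputed mask,
                 in [0, 1]; higher means less trustworthy
    """
    pred_loss = F.cross_entropy(logits, labels)
    # Confidence weighting: voxels whose imputed annotation is uncertain
    # contribute less to the explanation loss.
    weight = 1.0 - uncertainty
    expl_loss = ((weight * (saliency - annotation) ** 2).sum()
                 / weight.sum().clamp_min(1e-8))
    return pred_loss + lambda_expl * expl_loss
```

In DUE itself, the per-voxel uncertainty would come from the diffusion-based 3D interpolation of sparse slice annotations; here it is simply taken as a given tensor.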