XC: Exploring Quantitative Use Cases for Explanations in 3D Object Detection
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Explainable AI (XAI) methods are frequently applied to obtain qualitative insights about deep models' predictions. However, such insights need to be interpreted by a human observer to be useful. In this paper, we aim to use explanations directly to make decisions without human observers. We adopt two gradient-based explanation methods, Integrated Gradients (IG) and backprop, for the task of 3D object detection. Then, we propose a set of quantitative measures, named Explanation Concentration (XC) scores, that can be used for downstream tasks. These scores quantify the concentration of attributions within the boundaries of detected objects. We evaluate the effectiveness of XC scores via the task of distinguishing true positive (TP) and false positive (FP) detected objects in the KITTI and Waymo datasets. The results demonstrate an improvement of more than 100% on both datasets compared to other heuristics such as random guesses and the number of LiDAR points in the bounding box, raising confidence in XC's potential for application in more use cases. Our results also indicate that computationally expensive XAI methods like IG may not be more valuable when used quantitatively compared to simpler methods.
DOI: 10.48550/arxiv.2210.11590
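
The abstract describes XC scores as measuring how strongly a detection's attributions concentrate inside the detected object's boundaries. The sketch below is only a minimal illustration of that idea, not the paper's implementation: the function name `xc_score`, the axis-aligned box representation, the use of absolute attribution mass, and the normalization are all assumptions made for this example.

```python
import numpy as np


def xc_score(attributions, points, box_min, box_max, eps=1e-12):
    """Toy concentration score for one detection (illustrative only).

    Returns the fraction of total absolute attribution mass that falls
    inside the detection's axis-aligned bounding box. The paper's exact
    definition and normalization of XC may differ.

    attributions     : (N,) per-point attribution values (e.g. from IG or backprop)
    points           : (N, 3) LiDAR point coordinates
    box_min, box_max : (3,) opposite corners of the detection's box
    """
    mass = np.abs(np.asarray(attributions, dtype=float))
    pts = np.asarray(points, dtype=float)
    inside = np.all((pts >= box_min) & (pts <= box_max), axis=1)
    return float(mass[inside].sum() / (mass.sum() + eps))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-5.0, 5.0, size=(1000, 3))
    box_min, box_max = np.full(3, -2.0), np.full(3, 2.0)

    # Synthetic attributions concentrated inside the cube [-2, 2]^3:
    # the score is close to 1, mimicking a plausible true positive.
    concentrated = rng.normal(size=1000) * np.all(np.abs(pts) < 2.0, axis=1)
    print(f"concentrated: {xc_score(concentrated, pts, box_min, box_max):.3f}")

    # Diffuse attributions spread over the whole scene score much lower,
    # mimicking a false-positive-like detection.
    diffuse = rng.normal(size=1000)
    print(f"diffuse:      {xc_score(diffuse, pts, box_min, box_max):.3f}")
```

In the TP/FP use case from the abstract, a higher concentration of attribution mass inside the box would plausibly indicate a true positive, which is the intuition this toy score is meant to convey.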