Fast Explanation Using Shapley Value for Object Detection
Published in: IEEE Access, 2024-01, Vol. 12, p. 1-1
Main authors: ,
Format: Article
Language: English
Online access: Full text
Abstract: In explainable artificial intelligence (XAI) for object detection, saliency maps are employed to highlight regions important to a learned model's prediction. However, there is a trade-off: the more accurate the explanation, the higher the computational cost, which poses a challenge for practical applications. This paper therefore proposes a novel XAI method for object detection that addresses this challenge. In recent years, XAI methods built on the Shapley value have been widely studied because it satisfies desirable properties for explanatory validity; their common drawback, however, is a high computational cost that has hindered broad adoption. Our proposed method trains an explainer model to estimate the Shapley value, providing a reliable explanation for object detection with real-time inference. The framework can be applied to various object detectors in a model-agnostic manner. Quantitative experiments demonstrate that our method delivers the fastest explanations while outperforming other existing methods.
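For context on the computational cost the abstract refers to: the classical way to attribute a detector's score to image regions is Monte Carlo permutation sampling of the Shapley value, which requires one model evaluation per region per sampled permutation. The sketch below (an illustration of that baseline estimator, not the paper's learned-explainer method; `score_fn`, `toy_score`, and all names are hypothetical) shows why exact explanation is expensive.

```python
import numpy as np

def shapley_saliency(score_fn, n_regions, n_samples=200, seed=0):
    """Monte Carlo estimate of per-region Shapley values.

    score_fn: maps a boolean mask over image regions (True = region
    visible) to a scalar detection score for the object of interest.
    Returns one estimated Shapley value per region.
    """
    rng = np.random.default_rng(seed)
    values = np.zeros(n_regions)
    for _ in range(n_samples):
        # Sample a random ordering in which regions are revealed.
        perm = rng.permutation(n_regions)
        mask = np.zeros(n_regions, dtype=bool)
        prev = score_fn(mask)
        for i in perm:
            # Marginal contribution of region i given the regions
            # already revealed before it in this permutation.
            mask[i] = True
            cur = score_fn(mask)
            values[i] += cur - prev
            prev = cur
    return values / n_samples

# Toy stand-in for a detector: the score is 1.0 only when both
# regions 0 and 1 are visible (they jointly contain the object).
def toy_score(mask):
    return float(mask[0] and mask[1])

phi = shapley_saliency(toy_score, n_regions=4, n_samples=500)
# Regions 0 and 1 share the credit (about 0.5 each); 2 and 3 get 0.
```

Note the cost: each sampled permutation requires `n_regions + 1` detector evaluations, so faithful saliency maps need thousands of forward passes per image — the motivation for amortizing this into a single explainer-model inference, as the abstract describes.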
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3369890