Diagrammatization: Rationalizing with diagrammatic AI explanations for abductive-deductive reasoning on hypotheses
Main author: | |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Many visualizations have been developed for explainable AI (XAI), but they often require further reasoning by users to interpret. We argue that XAI should support diagrammatic and abductive reasoning for the AI to perform hypothesis generation and evaluation to reduce the interpretability gap. We propose Diagrammatization to i) perform Peircean abductive-deductive reasoning, ii) follow domain conventions, and iii) explain with diagrams visually or verbally. We implemented DiagramNet for a clinical application to predict cardiac diagnoses from heart auscultation, and explain with shape-based murmur diagrams. In modeling studies, we found that DiagramNet not only provides faithful murmur shape explanations, but also has better prediction performance than baseline models. We further demonstrate the interpretability and trustworthiness of diagrammatic explanations in a qualitative user study with medical students, showing that clinically relevant, diagrammatic explanations are preferred over technical saliency map explanations. This work contributes insights into providing domain-conventional abductive explanations for user-centric XAI. |
DOI: | 10.48550/arxiv.2302.01241 |