Visual Explanations with Attributions and Counterfactuals on Time Series Classification
Saved in:
Main authors: | , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | With the rising necessity of explainable artificial intelligence (XAI), we
see an increase in task-dependent XAI methods on varying abstraction levels.
XAI techniques on a global level explain model behavior and on a local level
explain sample predictions. We propose a visual analytics workflow to support
seamless transitions between global and local explanations, focusing on
attributions and counterfactuals on time series classification. In particular,
we adapt local XAI techniques (attributions) that are developed for traditional
datasets (images, text) to analyze time series classification, a data type that
is typically less intelligible to humans. To generate a global overview, we
apply local attribution methods to the data, creating explanations for the
whole dataset. These explanations are projected onto two dimensions, depicting
model behavior trends, strategies, and decision boundaries. To further inspect
the model decision-making as well as potential data errors, a what-if analysis
facilitates hypothesis generation and verification on both the global and local
levels. We constantly collected and incorporated expert user feedback, as well
as insights based on their domain knowledge, resulting in a tailored analysis
workflow and system that tightly integrates time series transformations into
explanations. Lastly, we present three use cases, verifying that our technique
enables users to (1) explore data transformations and feature relevance,
(2) identify model behavior and decision boundaries, and (3) understand the
reasons for misclassifications. |
DOI: | 10.48550/arxiv.2307.08494 |
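
The abstract describes computing local attributions for every sample of a time series dataset and projecting the resulting explanations onto two dimensions to obtain a global overview of model behavior, complemented by a what-if analysis. The following is a minimal, illustrative sketch of that kind of pipeline, assuming a toy univariate dataset, a scikit-learn classifier, occlusion as the attribution method, and PCA as the projection; the paper's actual attribution techniques, models, and projection method may differ.

```python
# Illustrative sketch only: occlusion and PCA stand in for the attribution and
# projection methods; the dataset, model, and window size are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Toy dataset: 200 univariate series of length 100, two classes that differ
# by a bump placed in the first or second half of the series.
n, length = 200, 100
X = rng.normal(size=(n, length))
y = rng.integers(0, 2, size=n)
X[y == 0, 20:30] += 2.0
X[y == 1, 70:80] += 2.0

clf = LogisticRegression(max_iter=1000).fit(X, y)

def occlusion_attribution(model, x, label, window=10):
    """Importance per time step = drop in class probability when a window is zeroed."""
    base = model.predict_proba(x[None, :])[0, label]
    attr = np.zeros_like(x)
    for start in range(0, len(x), window):
        x_occ = x.copy()
        x_occ[start:start + window] = 0.0          # occlude one window
        p = model.predict_proba(x_occ[None, :])[0, label]
        attr[start:start + window] = base - p      # larger drop = more relevant
    return attr

# Local explanations for every sample, stacked into a dataset-level matrix.
A = np.stack([occlusion_attribution(clf, X[i], y[i]) for i in range(n)])

# Project the attributions to 2D; clusters hint at distinct model strategies.
coords = PCA(n_components=2).fit_transform(A)
print(coords.shape)  # (200, 2)

# A minimal what-if: shift the discriminative bump of one sample and compare predictions.
x_what_if = np.roll(X[0], 50)
print(clf.predict_proba(X[0][None, :]), clf.predict_proba(x_what_if[None, :]))
```

The 2D coordinates could then be plotted and colored by class or prediction to look for clusters of similar explanation patterns and decision boundaries, while the perturbation at the end mirrors the hypothesis generation and verification step the abstract attributes to the what-if analysis.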