Captum: A unified and generic model interpretability library for PyTorch
Format: Article
Language: English
Online access: Order full text
Abstract: In this paper we introduce a novel, unified, open-source model
interpretability library for PyTorch [12]. The library contains generic
implementations of a number of gradient- and perturbation-based attribution
algorithms, also known as feature, neuron, and layer importance algorithms, as
well as a set of evaluation metrics for these algorithms. It can be used for
both classification and non-classification models, including graph-structured
models built on neural networks (NN). In this paper we give a high-level
overview of the supported attribution algorithms and show how to perform
memory-efficient and scalable computations. We emphasize that the three main
characteristics of the library are multimodality, extensibility, and ease of
use. Multimodality means support for different input modalities such as image,
text, audio, or video. Extensibility allows new algorithms and features to be
added. The library is also designed for easy understanding and use. In
addition, we introduce an interactive visualization tool called Captum
Insights that is built on top of the Captum library and allows sample-based
model debugging and visualization using feature importance metrics.
DOI: 10.48550/arxiv.2009.07896