audioLIME: Listenable Explanations Using Source Separation
Saved in:
Main Authors: | , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Deep neural networks (DNNs) are successfully applied in a wide variety of
music information retrieval (MIR) tasks, but their predictions are usually not
interpretable. We propose audioLIME, a method based on Local Interpretable
Model-agnostic Explanations (LIME) extended by a musical definition of
locality. The perturbations used in LIME are created by switching on/off
components extracted by source separation, which makes our explanations
listenable. We validate audioLIME on two different music tagging systems and
show that it produces sensible explanations in situations where a competing
method cannot. |
---|---|
DOI: | 10.48550/arxiv.2008.00582 |
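
The summary describes the core mechanism: the interpretable components are the stems returned by a source separator, and LIME's perturbations are formed by switching those stems on and off before re-mixing the audio and querying the tagger. The sketch below illustrates that idea only; it is not the authors' implementation. The stems are random placeholder arrays, `predict_tag_score` is a hypothetical stand-in for the black-box tagger, and a ridge regression serves as the local linear surrogate.

```python
# Illustrative sketch of LIME with source-separated stems as interpretable
# components (placeholder data; not the audioLIME package itself).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Placeholder "stems": in practice these would come from a source separator
# (e.g. vocals/drums/bass/other), all the same length as the input mix.
components = {name: rng.standard_normal(22050)
              for name in ["vocals", "drums", "bass", "other"]}
names = list(components)

def predict_tag_score(audio: np.ndarray) -> float:
    """Hypothetical stand-in for the black-box music tagger being explained."""
    return float(np.tanh(audio.mean() + 0.1 * audio.std()))

# 1) Sample binary on/off masks over the stems (LIME's interpretable space).
n_samples = 200
masks = rng.integers(0, 2, size=(n_samples, len(names)))

# 2) Re-mix only the active stems and query the model on each perturbation.
scores = np.array([
    predict_tag_score(
        sum(components[n] for n, on in zip(names, mask) if on)
        if mask.any() else np.zeros(22050)
    )
    for mask in masks
])

# 3) Weight perturbations by how close they are to the original (all stems on).
weights = masks.mean(axis=1)

# 4) Fit a weighted linear surrogate; its coefficients are the explanation.
surrogate = Ridge(alpha=1.0).fit(masks, scores, sample_weight=weights)
for name, coef in sorted(zip(names, surrogate.coef_), key=lambda t: -abs(t[1])):
    print(f"{name:>6}: {coef:+.4f}")
```

Because each surrogate coefficient is tied to a separated stem rather than an abstract spectrogram patch, the resulting explanation can be played back, which is what makes the explanations "listenable" in the sense of the summary.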