CLEAR: Causal Explanations from Attention in Neural Recommenders
Published in: arXiv.org, 2022-10
Format: Article
Language: English
Online access: Full text
Abstract: We present CLEAR, a method for learning session-specific causal graphs, in the possible presence of latent confounders, from attention in pre-trained attention-based recommenders. These causal graphs describe user behavior, within the context captured by attention, and can provide a counterfactual explanation for a recommendation. In essence, these causal graphs allow answering "why" questions uniquely for any specific session. Using empirical evaluations, we show that, compared to naively using attention weights to explain input-output relations, counterfactual explanations found by CLEAR are shorter and an alternative recommendation is ranked higher in the original top-k recommendations.
ISSN: 2331-8422
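
The record gives only the high-level idea, so the following toy Python sketch is purely illustrative and is not the authors' CLEAR algorithm: it contrasts a naive attention-weight explanation with a brute-force counterfactual search over session items, mirroring the two quantities the abstract compares (explanation length and the alternative recommendation's rank in the original top-k). Every name here, including the dummy recommender and the attention weights, is invented for the example.

```python
# Illustrative sketch only: NOT the CLEAR method from the paper.
# Contrasts (a) naively reading attention weights as an explanation with
# (b) searching for a counterfactual explanation, i.e. a minimal subset of
# session items whose removal flips the recommendation to an alternative.
from __future__ import annotations

from itertools import combinations
from typing import Callable, Sequence


def naive_attention_explanation(attention: dict[str, float], k: int = 3) -> list[str]:
    """Explain a recommendation by the k session items with highest attention."""
    return sorted(attention, key=attention.get, reverse=True)[:k]


def counterfactual_explanation(
    session: Sequence[str],
    recommend: Callable[[Sequence[str]], list[str]],
    top_k: int = 10,
) -> tuple[list[str], str, int] | None:
    """Find the smallest set of session items whose removal changes the top-1 item.

    Returns (removed_items, alternative_item, rank_of_alternative_in_original_top_k),
    with rank -1 if the alternative was not in the original top-k, or None if no
    counterfactual exists.
    """
    original_top_k = recommend(session)[:top_k]
    original_top_1 = original_top_k[0]
    for size in range(1, len(session) + 1):  # smallest subsets first: shorter explanations
        for removed in combinations(session, size):
            reduced = [item for item in session if item not in removed]
            alternative = recommend(reduced)[0]
            if alternative != original_top_1:
                rank = (original_top_k.index(alternative) + 1
                        if alternative in original_top_k else -1)
                return list(removed), alternative, rank
    return None


# --- toy usage with a hand-written stand-in recommender ---------------------
def toy_recommend(session: Sequence[str]) -> list[str]:
    """Stand-in for the ranked output of a pre-trained attention-based recommender."""
    scores = {"phone_case": 0.0, "laptop_bag": 0.0, "charger": 0.0, "headphones": 0.0}
    if "phone" in session:
        scores["phone_case"] += 2.0
        scores["charger"] += 1.2
    if "laptop" in session:
        scores["laptop_bag"] += 1.8
        scores["charger"] += 0.5
    if "music_app" in session:
        scores["headphones"] += 1.2
    return sorted(scores, key=scores.get, reverse=True)


session = ["phone", "laptop", "music_app"]
attention = {"phone": 0.55, "laptop": 0.30, "music_app": 0.15}  # made-up weights

print("top-1 recommendation:", toy_recommend(session)[0])
print("naive attention explanation:", naive_attention_explanation(attention))
print("counterfactual explanation:", counterfactual_explanation(session, toy_recommend))
```

On this toy session, removing the single item "phone" flips the top recommendation to "laptop_bag", which was already ranked second in the original list, whereas the naive attention explanation lists all three session items; this mirrors, on invented data, the "shorter explanation" and "higher-ranked alternative" comparison described in the abstract.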