Learning-based counterfactual explanations for recommendation

Bibliographic Details
Published in: Science China. Information Sciences, 2024-08, Vol. 67 (8), p. 182102, Article 182102
Main Authors: Wen, Jingxuan; Liu, Huafeng; Jing, Liping; Yu, Jian
Format: Article
Language: English
Online Access: Full Text
Description
Summary: Counterfactual explanations explain recommendations by exploring the changes in effect caused by changes in cause. They have attracted significant attention in recommender system research as a way to explore the impact of changes in certain properties on the recommendation mechanism. Among counterfactual recommendation methods, item-based counterfactual explanation methods have attracted considerable attention because of their flexibility. Their core idea is to find a minimal subset of interacted items (i.e., short length) such that the recommended item falls out of the top-K recommendation list once these items are removed from the user's interactions (i.e., good quality). Usually, explanations are generated by ranking a precomputed importance of items, which fails to characterize the true importance of interacted items because the importance is computed separately from explanation generation. Additionally, the final explanations are produced by a fixed search strategy over the precomputed importance, so the quality and length of counterfactual explanations are deterministic and cannot be balanced once the search strategy is fixed. To overcome these obstacles, this study proposes learning-based counterfactual explanations for recommendation (LCER), which provides counterfactual explanations for personalized recommendations by jointly modeling factual and counterfactual preferences. To achieve consistency between the computation of importance and the generation of counterfactual explanations, LCER assigns an optimizable importance to each interacted item, supervised by the goal of counterfactual explanation to guarantee its credibility. Because of the model's flexibility, the trade-off between quality and length can be customized by setting different proportions. Experimental results on four real-world datasets demonstrate the effectiveness of LCER over several state-of-the-art baselines, both quantitatively and qualitatively.
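The mechanism sketched in the abstract can be illustrated with a short example. The following is a minimal, assumed JAX sketch of the general idea, not the authors' implementation: each interacted item receives a learnable "keep" logit, a toy dot-product recommender scores candidates under the resulting soft mask, and a loss pushes the recommended item below the top-K cutoff while penalizing the number of removed items. All names and values here (user_embedding, lcer_style_loss, lam, the 0.1 margin, the toy sizes) are hypothetical choices for illustration; the paper's actual recommender, loss, and trade-off scheme may differ.

```python
# Minimal sketch (assumed, not the paper's code) of learning item-level
# importance for a counterfactual explanation with a toy dot-product
# recommender. mask[i] -> 0 means item i is counterfactually removed.
import jax
import jax.numpy as jnp

def user_embedding(mask, hist_embs):
    # User profile: mask-weighted mean of interacted-item embeddings.
    return (mask[:, None] * hist_embs).sum(0) / (mask.sum() + 1e-8)

def lcer_style_loss(logits, hist_embs, all_embs, target, K, lam):
    mask = jax.nn.sigmoid(logits)              # soft keep-probabilities
    scores = all_embs @ user_embedding(mask, hist_embs)
    kth = jnp.sort(scores)[-K]                 # top-K cutoff score
    # Explanation quality: push the target below the top-K cutoff
    # (0.1 is an assumed margin).
    quality = jax.nn.relu(scores[target] - kth + 0.1)
    # Explanation length: number of (softly) removed items.
    length = jnp.sum(1.0 - mask)
    return quality + lam * length              # lam trades quality vs. length

# Toy setup: 100 candidate items, 16-dim embeddings, a 20-item history.
key = jax.random.PRNGKey(0)
all_embs = jax.random.normal(key, (100, 16))
hist = jnp.arange(20)                          # indices of interacted items
hist_embs = all_embs[hist]
target, K, lam = 50, 10, 0.05                  # assumed recommended item
logits = jnp.full((20,), 3.0)                  # start near "keep everything"

grad_fn = jax.grad(lcer_style_loss)
for _ in range(300):
    logits -= 0.5 * grad_fn(logits, hist_embs, all_embs, target, K, lam)

explanation = hist[jax.nn.sigmoid(logits) < 0.5]   # items whose removal topples the target
```

Thresholding the learned mask at 0.5 yields the explanation set; sweeping lam toward zero favors explanation quality while larger values favor shorter explanations, mirroring the customizable quality/length trade-off described above.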
ISSN: 1674-733X
eISSN: 1869-1919
DOI: 10.1007/s11432-023-3974-2