Efficient Neural Network-Based Estimation of Interval Shapley Values

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2024-12, Vol. 36 (12), pp. 8108-8119
Authors: Napolitano, Davide; Vaiani, Lorenzo; Cagliero, Luca
Format: Article
Language: English
Description
Abstract: The use of Shapley Values (SVs) to explain machine learning model predictions is well established. Recent research efforts have been devoted to generating efficient Neural Network-based SV estimates. However, the variability of the generated estimates, which depend on the selected data sampling, model, and training parameters, brings the reliability of such estimates into question. By leveraging the concept of Interval SVs, we propose to incorporate SV uncertainty directly into the learning process. Specifically, we explain ensemble models composed of multiple predictors, each one generating potentially different outcomes. Unlike all existing approaches, the explainer design is tailored to Interval SV learning instead of SVs only. We present three new Neural Network-based explainers relying on different ISV paradigms, i.e., a Multi-Task Learning network inspired by the Shapley value's weighted least squares characterization and two Interval Shapley-Like Value Neural estimators. The experiments thoroughly evaluate the new approaches on ten benchmark datasets, looking for the best compromise between interval accuracy and explainer efficiency.
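The weighted least squares characterization of the Shapley value referenced in the abstract can be illustrated with a small exact computation. The sketch below is a generic KernelSHAP-style solver, not the paper's neural estimator; the function name `exact_shapley_wls` and the `value_fn` interface are illustrative assumptions. It enumerates all coalitions, so it is only feasible for a handful of features.

```python
import itertools
import math
import numpy as np

def exact_shapley_wls(value_fn, n_features):
    """Exact Shapley values via the weighted least squares characterization.

    value_fn: maps a tuple of feature indices (a coalition) to a scalar payoff.
    Enumerates every proper non-empty coalition, so cost grows as 2^n_features.
    """
    coalitions, weights, values = [], [], []
    for size in range(1, n_features):
        # Shapley kernel weight for coalitions of this size
        w = (n_features - 1) / (math.comb(n_features, size) * size * (n_features - size))
        for subset in itertools.combinations(range(n_features), size):
            z = np.zeros(n_features)
            z[list(subset)] = 1.0
            coalitions.append(z)
            weights.append(w)
            values.append(value_fn(subset))
    Z = np.array(coalitions)
    W = np.diag(weights)
    v0 = value_fn(())                         # empty-coalition baseline
    vM = value_fn(tuple(range(n_features)))   # grand-coalition payoff
    y = np.array(values) - v0
    # Minimize (Z phi - y)^T W (Z phi - y) subject to sum(phi) = vM - v0,
    # solved in closed form with a Lagrange multiplier.
    A = Z.T @ W @ Z
    b = Z.T @ W @ y
    Ainv = np.linalg.inv(A)
    ones = np.ones(n_features)
    lam = (ones @ Ainv @ b - (vM - v0)) / (ones @ Ainv @ ones)
    return Ainv @ (b - lam * ones)

# For an additive game the Shapley values equal the per-feature contributions:
contrib = [1.0, 2.0, 3.0]
phi = exact_shapley_wls(lambda S: sum(contrib[i] for i in S), 3)
```

For an additive payoff function like the one above, the recovered `phi` matches the individual contributions exactly; interval variants, as the abstract describes, would instead track how such estimates vary across the predictors of an ensemble.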
ISSN: 1041-4347; 1558-2191
DOI:10.1109/TKDE.2024.3420180