A Perspective on Explanations of Molecular Prediction Models

Bibliographic Details
Published in: Journal of chemical theory and computation 2023-04, Vol. 19 (8), p. 2149-2160
Authors: Wellawatte, Geemi P.; Gandhi, Heta A.; Seshadri, Aditi; White, Andrew D.
Format: Article
Language: English
Online access: Full text
Description
Abstract: Chemists can be skeptical about using deep learning (DL) in decision making because of the lack of interpretability of “black-box” models. Explainable artificial intelligence (XAI) is a branch of artificial intelligence (AI) that addresses this drawback by providing tools to interpret DL models and their predictions. We review the principles of XAI in the domain of chemistry and emerging methods for creating and evaluating explanations. We then focus on methods developed by our group and their applications in predicting the solubility, blood–brain barrier permeability, and scent of molecules. We show that XAI methods such as chemical counterfactuals and descriptor explanations can explain DL predictions while giving insight into structure–property relationships. Finally, we discuss how a two-step process of developing a black-box model and then explaining its predictions can uncover structure–property relationships.
ISSN: 1549-9618
ISSN: 1549-9626
DOI: 10.1021/acs.jctc.2c01235