Transparency of deep neural networks for medical image analysis: A review of interpretability methods


Bibliographic Details
Published in: Computers in Biology and Medicine, 2022-01, Vol. 140, Article 105111
Authors: Salahuddin, Zohaib; Woodruff, Henry C.; Chatterjee, Avishek; Lambin, Philippe
Format: Article
Language: English
Online access: Full text
Description
Abstract: Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for diagnosis and treatment decisions. Deep neural networks have shown performance equal to or better than that of clinicians in many tasks, owing to the rapid increase in available data and computational power. To conform to the principles of trustworthy AI, an AI system must be transparent, robust, fair, and ensure accountability. Current deep neural solutions are referred to as black boxes because the specifics of their decision-making process are not well understood. Therefore, the interpretability of deep neural networks must be ensured before they can be incorporated into the routine clinical workflow. In this narrative review, we used systematic keyword searches and domain expertise to identify nine types of interpretability methods that have been applied to deep learning models for medical image analysis, grouped by the type of explanation generated and by technical similarities. Furthermore, we report the progress made towards evaluating the explanations produced by various interpretability methods. Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions concerning the interpretability of deep neural networks for medical image analysis.

Highlights:
•Interpretability of deep neural networks is important for fostering clinical trust and for troubleshooting systems.
•Interpretability methods for medical image analysis tasks can be classified into nine different types.
•Evaluation of interpretability methods in a clinical setting is important.
•Quantitative and qualitative evaluation of post-hoc explanations is important to determine their sanity.
•Interpretability methods can help in discovering new imaging biomarkers.
ISSN: 0010-4825, 1879-0534
DOI: 10.1016/j.compbiomed.2021.105111