Benchmarking saliency methods for chest X-ray interpretation


Bibliographic details
Published in: Nature Machine Intelligence, 2022-10, Vol. 4 (10), pp. 867-878
Authors: Saporta, Adriel; Gui, Xiaotong; Agrawal, Ashwin; Pareek, Anuj; Truong, Steven Q. H.; Nguyen, Chanh D. T.; Ngo, Van-Doan; Seekins, Jayne; Blankenberg, Francis G.; Ng, Andrew Y.; Lungren, Matthew P.; Rajpurkar, Pranav
Format: Article
Language: English
Online access: Full text
Abstract: Saliency methods, which produce heat maps that highlight the areas of the medical image that influence model prediction, are often presented to clinicians as an aid in diagnostic decision-making. However, rigorous investigation of the accuracy and reliability of these strategies is necessary before they are integrated into the clinical setting. In this work, we quantitatively evaluate seven saliency methods, including Grad-CAM, across multiple neural network architectures using two evaluation metrics. We establish the first human benchmark for chest X-ray segmentation in a multilabel classification set-up, and examine under what clinical conditions saliency maps might be more prone to failure in localizing important pathologies compared with a human expert benchmark. We find that (1) while Grad-CAM generally localized pathologies better than the other evaluated saliency methods, all seven performed significantly worse compared with the human benchmark, (2) the gap in localization performance between Grad-CAM and the human benchmark was largest for pathologies that were smaller in size and had shapes that were more complex, and (3) model confidence was positively correlated with Grad-CAM localization performance. Our work demonstrates that several important limitations of saliency methods must be addressed before we can rely on them for deep learning explainability in medical imaging. Saliency methods are used to localize areas of medical images that influence machine learning model predictions, but their accuracy and reliability require investigation. Saporta and colleagues evaluate seven saliency methods using different model architectures, and find that saliency maps perform worse than a human radiologist benchmark.
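The abstract does not spell out the two localization metrics used; one common way to score a saliency map against an expert's segmentation is intersection-over-union (IoU) after binarizing the heat map. The sketch below is purely illustrative: the 0.5 threshold and the `saliency_iou` helper are assumptions for demonstration, not the authors' evaluation protocol.

```python
import numpy as np

def saliency_iou(heatmap: np.ndarray, mask: np.ndarray, threshold: float = 0.5) -> float:
    """Intersection-over-union between a binarized saliency map and an expert mask.

    heatmap:   2D array of saliency scores scaled to [0, 1] (e.g. a Grad-CAM output).
    mask:      2D boolean array holding the radiologist's segmentation.
    threshold: cut-off used to binarize the heat map (an illustrative choice).
    """
    pred = heatmap >= threshold
    intersection = np.logical_and(pred, mask).sum()
    union = np.logical_or(pred, mask).sum()
    return float(intersection) / union if union > 0 else 0.0

# Toy example: a 4x4 heat map compared with a hand-drawn 2x3 mask.
heatmap = np.array([[0.9, 0.8, 0.1, 0.0],
                    [0.7, 0.6, 0.2, 0.0],
                    [0.1, 0.1, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 0.0]])
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :3] = True
print(saliency_iou(heatmap, mask))  # 4 overlapping pixels / 6 in the union ≈ 0.667
```

Comparing such per-image scores for saliency-derived maps against the human benchmark segmentations is one way the localization gap described above could be quantified.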
ISSN: 2522-5839
DOI: 10.1038/s42256-022-00536-x