Explainability and causability for artificial intelligence-supported medical image analysis in the context of the European In Vitro Diagnostic Regulation

Bibliographic Details
Published in: New Biotechnology 2022-09, Vol. 70, pp. 67-72
Main Authors: Müller, Heimo; Holzinger, Andreas; Plass, Markus; Brcic, Luka; Stumptner, Cornelia; Zatloukal, Kurt
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Artificial Intelligence (AI) for the biomedical domain is gaining significant interest and holds considerable potential for the future of healthcare, particularly in the context of in vitro diagnostics. The European In Vitro Diagnostic Medical Device Regulation (IVDR) explicitly includes software in its requirements. This poses major challenges for In Vitro Diagnostic devices (IVDs) that involve Machine Learning (ML) algorithms for data analysis and decision support, and it can make some of the most successful ML and Deep Learning (DL) methods harder to apply in the biomedical domain simply because manufacturers do not supply the required explanatory components. In this context, trustworthy AI has to empower biomedical professionals to take responsibility for their decision-making, which clearly raises the need for explainable AI methods. Explainable AI methods, such as layer-wise relevance propagation, can highlight and visualize the parts of a neural network's inputs and internal representations that caused a given result. In the same way that usability encompasses measurements for the quality of use, the concept of causability encompasses measurements for the quality of explanations produced by explainable AI methods. This paper describes both concepts and gives examples of how explainability and causability are essential in order to demonstrate scientific validity as well as analytical and clinical performance for future AI-based IVDs.

Highlights:
• Causability has to be shown for features recognized by In Vitro Diagnostic devices.
• Analytical and clinical performance of AI algorithms must be monitored.
• Explainability and causability have to be demonstrated for performance monitoring.
• Experts need evidence of explainability/causability in order to take responsibility.
• AI needs causability measures to ensure the quality of explainability.
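The abstract names layer-wise relevance propagation (LRP) as one concrete explainable-AI method. As a rough illustration only (the paper itself does not prescribe an implementation), the following NumPy sketch applies the epsilon-rule variant of LRP to a hypothetical two-layer ReLU network; all weights, dimensions, and variable names are invented placeholders for this example.

```python
# Illustrative sketch of layer-wise relevance propagation (LRP, epsilon rule)
# on a hypothetical two-layer ReLU network. Weights, sizes, and the input
# are random placeholders, not anything taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(8, 6))   # hypothetical input->hidden weights
W2 = rng.normal(size=(6, 3))   # hypothetical hidden->output weights
x = rng.normal(size=8)         # one hypothetical input sample

# Forward pass, keeping activations for the backward relevance pass.
a0 = x
a1 = np.maximum(0.0, a0 @ W1)  # ReLU hidden activations
a2 = a1 @ W2                   # output scores

# Relevance starts at the winning output; all other outputs get zero.
R2 = np.zeros_like(a2)
R2[np.argmax(a2)] = a2.max()

def lrp_epsilon(a, W, R, eps=1e-6):
    """Redistribute relevance R from a layer's outputs back to its inputs,
    proportionally to each input's contribution z_jk = a_j * w_jk."""
    z = a[:, None] * W                    # per-connection contributions
    denom = z.sum(axis=0)                 # total input to each output unit
    denom = denom + eps * np.where(denom >= 0, 1.0, -1.0)  # stabilizer
    return (z / denom) @ R                # relevance of each input unit

R1 = lrp_epsilon(a1, W2, R2)   # relevance of hidden units
R0 = lrp_epsilon(a0, W1, R1)   # relevance of input features (the "heatmap")

print("input relevance:", np.round(R0, 3))
# Epsilon-LRP approximately conserves total relevance across layers:
print("sum check:", round(R0.sum(), 3), "vs", round(R2.sum(), 3))
```

For real architectures, established libraries such as iNNvestigate or Captum provide LRP rules; the sketch above is only meant to show the relevance-redistribution mechanics that make the highlighted input regions interpretable.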
ISSN: 1871-6784, 1876-4347
DOI: 10.1016/j.nbt.2022.05.002