Interpretability, personalization and reliability of a machine learning based clinical decision support system
Published in: Data Mining and Knowledge Discovery, 2022-05, Vol. 36 (3), pp. 1140-1173
Authors: , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Artificial intelligence (AI) has achieved notable performance in many fields, and its research impact in healthcare is unquestionable. Nevertheless, the deployment of such computational models in clinical practice remains limited. Among the major recognized barriers to successful real-world machine learning applications are a lack of transparency, reliability, and personalization. These aspects are decisive not only for patient safety but also for securing the confidence of clinicians. Explainable AI aims to address the transparency and reliability concerns of artificial intelligence, making it possible to better understand and trust a model and to justify its outcomes, thus effectively assisting clinicians in rationalizing the model's predictions. This work proposes an innovative machine learning based approach that implements a hybrid scheme, systematically combining knowledge-driven and data-driven techniques. In a first step, a global set of interpretable rules is generated, grounded in clinical evidence. In a second phase, a machine learning model is trained to select, from the global set of rules, the subset most appropriate for a given patient, according to that patient's particular characteristics. This approach simultaneously addresses three of the central requirements of explainable AI (interpretability, personalization, and reliability) without impairing the accuracy of the model's predictions. The scheme was validated on a real dataset provided by two Portuguese hospitals, Santa Cruz Hospital (Lisbon) and Santo André Hospital (Leiria), comprising a total of N = 1111 patients who suffered an acute coronary syndrome event, with 30-day mortality as the assessed outcome. Compared with standard black-box structures (e.g., a feedforward neural network), the proposed scheme achieves similar performance while simultaneously ensuring clinical interpretability and personalization of the model and providing a level of reliability for the estimated mortality risk.
ISSN: 1384-5810; 1573-756X
DOI: 10.1007/s10618-022-00821-8
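
For intuition, the following is a minimal Python sketch of the two-step hybrid scheme the abstract describes: a fixed global set of interpretable, clinically motivated rules, plus a learned selector that picks the per-patient subset of rules. Everything concrete here is assumed for illustration: the rule names and thresholds, the four-feature patient layout, the synthetic training labels, and the choice of a multi-label MLP as the selector; the paper's actual rule base, selection model, and training procedure are not reproduced.

```python
# Hypothetical sketch of the hybrid rule-selection scheme; none of the
# rules, thresholds, or training targets below come from the paper.
from dataclasses import dataclass
from typing import Callable

import numpy as np
from sklearn.neural_network import MLPClassifier


@dataclass
class Rule:
    """An interpretable rule: a condition on patient features plus a risk value."""
    name: str
    applies: Callable[[np.ndarray], bool]  # condition on one patient's features
    risk: float                            # 30-day mortality risk the rule assigns


# Step 1: a global set of interpretable rules (illustrative thresholds only).
# Assumed feature layout: [age, systolic_bp, heart_rate, creatinine]
GLOBAL_RULES = [
    Rule("elderly_hypotensive", lambda x: x[0] > 75 and x[1] < 100, risk=0.30),
    Rule("tachycardic",         lambda x: x[2] > 110,               risk=0.20),
    Rule("renal_impairment",    lambda x: x[3] > 1.5,               risk=0.25),
    Rule("baseline",            lambda x: True,                     risk=0.05),
]

# Step 2: train a model to select, per patient, the relevant subset of rules.
# A multi-label MLP predicts one relevance flag per rule; the training data
# and labels here are synthetic placeholders.
rng = np.random.default_rng(seed=0)
X_train = rng.normal(loc=[65, 120, 80, 1.0], scale=[10, 20, 15, 0.4], size=(200, 4))
y_train = rng.integers(0, 2, size=(200, len(GLOBAL_RULES)))  # placeholder targets

selector = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
selector.fit(X_train, y_train)


def predict_risk(x: np.ndarray) -> tuple[float, list[str]]:
    """Aggregate the selected, applicable rules into a personalized risk estimate."""
    mask = selector.predict(x.reshape(1, -1))[0]            # one flag per rule
    chosen = [r for r, keep in zip(GLOBAL_RULES, mask) if keep and r.applies(x)]
    if not chosen:
        chosen = [GLOBAL_RULES[-1]]                         # fall back to baseline
    risk = float(np.mean([r.risk for r in chosen]))
    return risk, [r.name for r in chosen]                   # risk plus justification


risk, justification = predict_risk(np.array([78.0, 95.0, 115.0, 1.8]))
print(f"estimated 30-day mortality risk: {risk:.2f}, via rules: {justification}")
```

The names of the selected rules double as the justification for each prediction, which is how a scheme of this shape can expose interpretability and personalization even though the selector itself remains a learned component.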