An explainable AI approach for diagnosis of COVID-19 using MALDI-ToF mass spectrometry

Bibliographic Details
Published in: Expert Systems with Applications, 2024-02, Vol. 236, p. 121226, Article 121226
Authors: Seethi, Venkata Devesh Reddy; LaCasse, Zane; Chivte, Prajkta; Bland, Joshua; Kadkol, Shrihari S.; Gaillard, Elizabeth R.; Bharti, Pratool; Alhoori, Hamed
Format: Article
Language: English
Online access: Full text
Abstract: Current artificial intelligence (AI) applications for the diagnosis of coronavirus disease 2019 (COVID-19) often lack a biological foundation in the decision-making process. In this study, we employed AI for COVID-19 diagnosis using mass spectrometry (MS) data and leveraged explainable AI (X-AI) to explain the decision-making process on a local (per-sample) and global (all-samples) basis. We first assessed eight machine learning models combined with five feature engineering techniques using five-fold stratified cross-validation. The best accuracy was achieved by a Random Forest (RF) classifier using ratios of areas under the curve (AUC) from the MS data as features; these features were chosen because they tentatively represent both human and viral proteins in human gargle samples. We then evaluated the RF classifier with a 70%-30% train-test split of the 152 human gargle samples, yielding an accuracy of 94.12% on the test set. Employing X-AI, we further interpreted the RF model using Shapley additive explanations (SHAP) and feature importance techniques, including permutation-based and impurity-based feature importances. With these interpretation models offering local and global explanations of the machine learning model's decisions, we devised a straightforward, three-stage X-AI framework that can enable medical practitioners to understand the mechanisms of a black-box AI model, instilling trust in the AI model by providing the rationale for its decisions.
Highlights:
• Trained an AI model on human oral gargle samples analyzed by the MALDI MS method.
• Achieved high accuracy in COVID-19 diagnosis.
• Incorporated X-AI to explain the outcomes of AI models from different perspectives.
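The model-selection step the abstract describes (comparing classifiers under five-fold stratified cross-validation) corresponds to a standard scikit-learn pattern. The following is a minimal sketch for the winning Random Forest only; the feature matrix, labels, sample count, and hyperparameters are synthetic placeholders, not the paper's actual AUC-ratio pipeline.

```python
# Sketch: five-fold stratified cross-validation of a Random Forest classifier.
# X and y are random stand-ins for the paper's AUC-ratio features and labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((152, 10))          # stand-in for AUC-ratio features (152 samples)
y = rng.integers(0, 2, size=152)   # stand-in for COVID-19 positive/negative labels

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```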
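Likewise, the three interpretation techniques the abstract names (SHAP for local, per-sample explanations; permutation-based and impurity-based feature importances for global ones) map onto standard library calls. A hedged sketch follows, assuming the third-party shap package is installed; the data, model, and split below are again synthetic stand-ins rather than the authors' implementation.

```python
# Sketch: local (SHAP) and global (permutation, impurity) explanations of a
# Random Forest, using synthetic stand-in data and the paper's 70%-30% split.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((152, 10))          # stand-in for AUC-ratio features
y = rng.integers(0, 2, size=152)   # stand-in for COVID-19 labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Local explanation: per-sample, per-feature SHAP contributions for test predictions.
shap_values = shap.TreeExplainer(clf).shap_values(X_test)

# Global explanations: permutation and impurity-based feature importances.
perm = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
print("permutation importances:", perm.importances_mean.round(3))
print("impurity importances:   ", clf.feature_importances_.round(3))
```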
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2023.121226