Interpretable Outcome Prediction with Sparse Bayesian Neural Networks in Intensive Care
Format: Article
Language: English
Abstract: Clinical decision making is challenging because of pathological complexity, as well as the large amounts of heterogeneous data generated as part of routine clinical care. In recent years, machine learning tools have been developed to aid this process. Intensive care unit (ICU) admissions represent the most data-dense and time-critical patient care episodes. In this context, prediction models may help clinicians determine which patients are most at risk and prioritize care. However, flexible tools such as artificial neural networks (ANNs) suffer from a lack of interpretability, limiting their acceptability to clinicians. In this work, we propose a novel interpretable Bayesian neural network architecture which offers both the flexibility of ANNs and interpretability in terms of feature selection. In particular, we employ a sparsity-inducing prior distribution in a tied manner to learn which features are important for outcome prediction. We evaluate our approach on the task of mortality prediction using two real-world ICU cohorts. In collaboration with clinicians, we found that, in addition to the predicted outcome results, our approach can provide novel insights into the importance of different clinical measurements. This suggests that our model can support medical experts in their decision-making process.
DOI: 10.48550/arxiv.1905.02599
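
The abstract's key mechanism, a sparsity-inducing prior applied "in a tied manner", means that all first-layer weights leaving a given input feature share a single scale parameter, so the learned per-feature scales double as feature-importance scores. The record does not specify the paper's exact prior or inference scheme, so the sketch below is a minimal PyTorch illustration of the tied-scale idea using a point-estimate (MAP-style) sparsity penalty in place of full Bayesian inference; the class name `TiedSparsityNet` and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TiedSparsityNet(nn.Module):
    """Feed-forward net whose first-layer weights share a per-feature scale.

    All weights leaving input feature j are multiplied by one scale tau_j
    (the "tied" construction). A sparsity-inducing penalty on the taus
    drives unimportant features toward zero, so the learned scales can be
    read off as feature-importance scores. This is a MAP-style surrogate
    for the Bayesian treatment described in the abstract.
    """

    def __init__(self, n_features: int, n_hidden: int):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(n_features, n_hidden) * 0.1)
        self.b1 = nn.Parameter(torch.zeros(n_hidden))
        # One tied log-scale per input feature, shared across hidden units.
        self.log_tau = nn.Parameter(torch.zeros(n_features))
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tau = self.log_tau.exp()  # per-feature scale, tied across the layer
        h = torch.relu(x @ (self.w1 * tau[:, None]) + self.b1)
        return self.out(h).squeeze(-1)

    def sparsity_penalty(self) -> torch.Tensor:
        # L1 penalty on the tied scales: a simple stand-in for a
        # sparsity-inducing prior (e.g. Laplace- or horseshoe-style).
        return self.log_tau.exp().sum()

# Usage: binary mortality-style outcome on synthetic data.
model = TiedSparsityNet(n_features=20, n_hidden=32)
x = torch.randn(64, 20)
y = torch.randint(0, 2, (64,)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
loss = loss + 1e-2 * model.sparsity_penalty()
loss.backward()

# After training, rank input features by their learned tied scale.
importance = model.log_tau.exp().detach()
```

In a full Bayesian version, the tied scales would carry a prior and be inferred (e.g. variationally) rather than optimized directly, but the interpretability mechanism is the same: features whose shared scale collapses toward zero are effectively pruned from the network.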