Interpretable Outcome Prediction with Sparse Bayesian Neural Networks in Intensive Care

Clinical decision making is challenging because of pathological complexity, as well as large amounts of heterogeneous data generated as part of routine clinical care. In recent years, machine learning tools have been developed to aid this process. Intensive care unit (ICU) admissions represent the most data dense and time-critical patient care episodes. In this context, prediction models may help clinicians determine which patients are most at risk and prioritize care. However, flexible tools such as artificial neural networks (ANNs) suffer from a lack of interpretability limiting their acceptability to clinicians. In this work, we propose a novel interpretable Bayesian neural network architecture which offers both the flexibility of ANNs and interpretability in terms of feature selection. In particular, we employ a sparsity inducing prior distribution in a tied manner to learn which features are important for outcome prediction. We evaluate our approach on the task of mortality prediction using two real-world ICU cohorts. In collaboration with clinicians we found that, in addition to the predicted outcome results, our approach can provide novel insights into the importance of different clinical measurements. This suggests that our model can support medical experts in their decision making process.
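The core idea of the abstract (a sparsity-inducing prior applied "in a tied manner", so that all weights fanning out of one input feature share a single learned scale, and small scales flag unimportant features) can be sketched as follows. This is a minimal illustration, not the authors' implementation: they use a full Bayesian treatment, whereas here a MAP point estimate with a Laplace prior on the per-feature scales stands in, and all data, names, and hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ICU-style task: only features 0 and 1 drive the binary outcome.
n, d, h = 400, 6, 8
X = rng.normal(size=(n, d))
logit_true = 3.0 * X[:, 0] - 3.0 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_true))).astype(float)

# s holds ONE scale per input feature, shared ("tied") by every weight
# fanning out of that feature; a Laplace prior on s (an L1 penalty at the
# MAP point) pushes the scales of unneeded features toward zero.
s = np.full(d, 0.5)
V = rng.normal(scale=0.3, size=(d, h))   # hidden-layer weights
b1 = np.zeros(h)
w2 = rng.normal(scale=0.3, size=h)       # output weights
b2 = 0.0
lam, lr = 0.02, 0.2                      # L1 strength, learning rate

for _ in range(3000):
    # Forward pass: inputs are rescaled feature-wise before the hidden layer.
    Xs = X * s
    H = np.tanh(Xs @ V + b1)
    p = 1.0 / (1.0 + np.exp(-(H @ w2 + b2)))

    # Backward pass for mean binary cross-entropy.
    g = (p - y) / n                      # d(loss)/d(logits)
    dw2 = H.T @ g
    db2 = g.sum()
    dZ = np.outer(g, w2) * (1.0 - H ** 2)
    dV = Xs.T @ dZ
    db1 = dZ.sum(axis=0)
    # Tied scales: each s[j] collects gradient from ALL hidden units,
    # plus the L1 (Laplace-prior) subgradient.
    ds = ((dZ @ V.T) * X).sum(axis=0) + lam * np.sign(s)

    s -= lr * ds
    V -= lr * (dV + 1e-3 * V)
    b1 -= lr * db1
    w2 -= lr * (dw2 + 1e-3 * w2)
    b2 -= lr * db2

# |s| stays large for the informative features and shrinks for the rest,
# giving the per-feature importance ranking the paper aims for.
print(np.round(np.abs(s), 3))
```

The tied construction is what makes the result readable as feature selection: a single number per clinical measurement, rather than a diffuse pattern spread over many weights.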

Bibliographic Details
Main authors: Overweg, Hiske, Popkes, Anna-Lena, Ercole, Ari, Li, Yingzhen, Hernández-Lobato, José Miguel, Zaykov, Yordan, Zhang, Cheng
Format: Article
Language: eng
Subjects: Computer Science - Learning; Statistics - Machine Learning
creator Overweg, Hiske; Popkes, Anna-Lena; Ercole, Ari; Li, Yingzhen; Hernández-Lobato, José Miguel; Zaykov, Yordan; Zhang, Cheng
description Clinical decision making is challenging because of pathological complexity, as well as large amounts of heterogeneous data generated as part of routine clinical care. In recent years, machine learning tools have been developed to aid this process. Intensive care unit (ICU) admissions represent the most data dense and time-critical patient care episodes. In this context, prediction models may help clinicians determine which patients are most at risk and prioritize care. However, flexible tools such as artificial neural networks (ANNs) suffer from a lack of interpretability limiting their acceptability to clinicians. In this work, we propose a novel interpretable Bayesian neural network architecture which offers both the flexibility of ANNs and interpretability in terms of feature selection. In particular, we employ a sparsity inducing prior distribution in a tied manner to learn which features are important for outcome prediction. We evaluate our approach on the task of mortality prediction using two real-world ICU cohorts. In collaboration with clinicians we found that, in addition to the predicted outcome results, our approach can provide novel insights into the importance of different clinical measurements. This suggests that our model can support medical experts in their decision making process.
doi_str_mv 10.48550/arxiv.1905.02599
format Article
creationdate 2019-05-07
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.1905.02599
language eng
recordid cdi_arxiv_primary_1905_02599
source arXiv.org
subjects Computer Science - Learning
Statistics - Machine Learning
title Interpretable Outcome Prediction with Sparse Bayesian Neural Networks in Intensive Care