Interpretability, personalization and reliability of a machine learning based clinical decision support system
Artificial intelligence (AI) has achieved notable performance in many fields, and its research impact in healthcare is unquestionable. Nevertheless, the deployment of such computational models in clinical practice is still limited. Among the major issues recognized as barriers to successful real-world machine learning applications are the lack of transparency, reliability, and personalization. These aspects are decisive not only for patient safety but also for securing the confidence of clinical professionals. Explainable AI aims to address these transparency and reliability concerns, making it possible to better understand and trust a model and to justify its outcomes, thus effectively assisting clinicians in rationalizing the model's predictions.

This work proposes an innovative machine learning based approach that implements a hybrid scheme, combining knowledge-driven and data-driven techniques in a systematic way. In a first step, a global set of interpretable rules is generated, founded on clinical evidence. In a second phase, a machine learning model is trained to select, from the global set of rules, the subset that is most appropriate for a given patient, according to that patient's particular characteristics. This approach simultaneously addresses three of the central requirements of explainable AI (interpretability, personalization, and reliability) without impairing the accuracy of the model's prediction. The scheme was validated on a real dataset provided by two Portuguese hospitals, the Santa Cruz Hospital (Lisbon) and the Santo André Hospital (Leiria), comprising a total of N = 1111 patients who suffered an acute coronary syndrome event and for whom 30-day mortality was assessed. Compared with standard black-box structures (e.g. a feedforward neural network), the proposed scheme achieves similar performance while simultaneously ensuring clinical interpretability and personalization of the model and providing a level of reliability for the estimated mortality risk.
Published in: | Data mining and knowledge discovery, 2022-05, Vol. 36 (3), pp. 1140-1173 |
---|---|
Main authors: | Valente, F.; Paredes, S.; Henriques, J.; Rocha, T.; de Carvalho, P.; Morais, J. |
Format: | Article |
Language: | English |
Subjects: | Acute coronary syndromes; Artificial Intelligence; Artificial neural networks; Chemistry and Earth Sciences; Computer Science; Customization; Data Mining and Knowledge Discovery; Decision support systems; Explainable artificial intelligence; Information Storage and Retrieval; Machine learning; Model accuracy; Mortality; Physics; Reliability aspects; Special Issue on Explainable and Interpretable Machine Learning and Data Mining; Statistics for Engineering |
Online access: | Full text |
DOI: | 10.1007/s10618-022-00821-8 |
ISSN: | 1384-5810 |
EISSN: | 1573-756X |
Publisher: | Springer US, New York |
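The abstract describes a two-step hybrid scheme: a global set of interpretable rules founded on clinical evidence, followed by a machine learning model that selects, for each patient, the subset of rules that should drive the risk estimate. The sketch below is a minimal illustration of that idea, not the implementation reported in the paper: the rules, thresholds, feature names, synthetic training labels, and the choice of a multi-output RandomForestClassifier as the rule selector are all assumptions made for the example.

```python
# Minimal sketch of the two-step hybrid idea summarized in the abstract.
# Illustrative assumptions only: the rules, thresholds, feature names, synthetic
# labels, and the RandomForestClassifier rule selector are not taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Step 1: a global set of interpretable rules. Each rule inspects a patient
# feature vector [age, systolic_bp, creatinine] and votes 1 for elevated risk.
RULES = [
    ("age_over_75",     lambda x: x[0] > 75),
    ("low_systolic_bp", lambda x: x[1] < 100),
    ("high_creatinine", lambda x: x[2] > 1.5),
]

def rule_votes(x):
    """Evaluate every global rule on one patient; returns a 0/1 vote per rule."""
    return np.array([int(rule(x)) for _, rule in RULES])

# Step 2: a machine learning model trained to decide, per patient, which rules
# from the global set are relevant (one binary output per rule).
rng = np.random.default_rng(0)
X_train = np.column_stack([
    rng.normal(65, 12, 200),    # age (years)
    rng.normal(120, 20, 200),   # systolic blood pressure (mmHg)
    rng.normal(1.1, 0.4, 200),  # creatinine (mg/dL)
])
# Synthetic relevance labels, purely for demonstration; in a real system these
# would come from which rules actually improve predictions for similar patients.
y_relevance = np.column_stack([
    (X_train[:, 0] > 70).astype(int),
    (X_train[:, 1] < 110).astype(int),
    (X_train[:, 2] > 1.3).astype(int),
])
selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(X_train, y_relevance)  # multi-output classification: one label per rule

def personalized_risk(x):
    """Aggregate only the rules the selector deems relevant for this patient."""
    relevance = selector.predict(x.reshape(1, -1))[0]  # 0/1 flag per rule
    votes = rule_votes(x)
    selected = votes[relevance == 1]
    return float(selected.mean()) if selected.size else 0.0

patient = np.array([78.0, 95.0, 1.8])  # hypothetical patient
print("estimated risk:", personalized_risk(patient))
print("rule votes:", dict(zip([name for name, _ in RULES], rule_votes(patient).tolist())))
```

Because the final risk is an aggregate of only the rules the selector activates for a given patient, each prediction remains traceable to a short list of human-readable conditions, which is the combination of interpretability and personalization the abstract emphasizes.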