Responsible AI for cardiovascular disease detection: Towards a privacy-preserving and interpretable model


Bibliographic details

Published in: Computer methods and programs in biomedicine, 2024-09, Vol. 254, p. 108289, Article 108289
Main authors: Ferdowsi, Mahbuba; Hasan, Md Mahmudul; Habib, Wafa
Format: Article
Language: English
Publisher: Elsevier B.V.
Online access: Full text
DOI: 10.1016/j.cmpb.2024.108289
PMID: 38905988
Description

Highlights
• Existing cardiovascular disease detection studies emphasize machine learning model performance metrics.
• This research contributes to cardiovascular disease detection by prioritizing responsible AI principles, specifically ethics, privacy, security, and transparency.
• Interpretable results align with clinical findings, enhancing confidence in classification results.
• With the integration of data anonymization and differential privacy, the research emphasizes the importance of ethical considerations in clinical implementation.

Cardiovascular disease (CD) is a major global health concern, affecting millions with symptoms such as fatigue and chest discomfort. Timely identification is crucial given CD's significant contribution to global mortality. In healthcare, artificial intelligence (AI) holds promise for advancing disease risk assessment and treatment outcome prediction, but the evolution of machine learning (ML) raises concerns about data privacy and bias, especially in sensitive healthcare applications. The objective of this work is to develop and implement a responsible AI model for CD prediction that prioritizes patient privacy and security while ensuring transparency, explainability, fairness, and ethical adherence in healthcare applications.

To predict CD while prioritizing patient privacy, our study anonymized the data by adding Laplace noise to sensitive features such as age and gender. The anonymized dataset was then analyzed within a differential privacy (DP) framework, which preserved confidentiality while still allowing insights to be extracted. Logistic Regression (LR), Gaussian Naïve Bayes (GNB), and Random Forest (RF) classifiers were compared, and the methodology integrated feature selection, statistical analysis, and SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) for interpretability. This approach facilitates transparent and interpretable AI decision-making, in line with responsible AI development principles, and combines privacy preservation, interpretability, and ethical considerations for accurate CD predictions.
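The anonymization step described above perturbs sensitive numeric features with Laplace noise whose scale is calibrated by a sensitivity and a privacy budget epsilon. The following is a minimal sketch of that mechanism; the feature values, sensitivity, and epsilon are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def laplace_anonymize(values, sensitivity, epsilon, seed=None):
    """Return a copy of `values` perturbed with Laplace noise.

    The noise scale is sensitivity / epsilon, the standard calibration of the
    Laplace mechanism: a smaller epsilon (stronger privacy) gives noisier data.
    """
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return np.asarray(values, dtype=float) + rng.laplace(0.0, scale, size=np.shape(values))

# Illustrative values only: perturb an 'age' column with a privacy budget of 1.0.
ages = np.array([52, 61, 45, 70, 58], dtype=float)
noisy_ages = laplace_anonymize(ages, sensitivity=1.0, epsilon=1.0, seed=42)
print(np.round(noisy_ages, 1))
```

How categorical attributes such as gender were encoded before noise was added is not detailed in the abstract; the sketch assumes purely numeric inputs.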
Results from the DP framework with LR were promising: an area under the curve (AUC) of 0.848 ± 0.03, an accuracy of 0.797 ± 0.02, a precision of 0.789 ± 0.02, a recall of 0.797 ± 0.02, and an F1 score of 0.787 ± 0.02, comparable to the performance of the non-privacy-preserving framework. The SHAP- and LIME-based results support clinical findings, demonstrate a commitment to transparent and interpretable AI decision-making, and align with the principles of responsible AI development.
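The metrics above are reported as mean ± standard deviation, which points to repeated evaluation over folds or splits. A minimal scikit-learn sketch of producing figures in that form is shown below; the dataset is a synthetic stand-in, and the 5-fold protocol and model settings are assumptions rather than the study's exact setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an (already anonymized) tabular CD dataset.
X, y = make_classification(n_samples=1000, n_features=13, n_informative=8,
                           random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scoring = ["roc_auc", "accuracy", "precision", "recall", "f1"]

# 5-fold cross-validation; report each metric as mean ± standard deviation.
cv_results = cross_validate(model, X, y, cv=5, scoring=scoring)
for metric in scoring:
    scores = cv_results[f"test_{metric}"]
    print(f"{metric}: {scores.mean():.3f} ± {scores.std():.3f}")
```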
Our study endorses a novel approach to predicting CD that combines data anonymization, privacy-preserving methods, the interpretability tools SHAP and LIME, and ethical considerations. This responsible AI framework ensures accurate predictions, privacy preservation, and user trust, underscoring the importance of comprehensive and transparent ML models in healthcare. This research therefore strengthens the ability to forecast CD, offering a vital lifeline to millions of CD patients globally and potentially preventing numerous fatalities.
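The abstract credits SHAP and LIME for the interpretability results but gives no implementation detail. The following self-contained sketch shows one common way to obtain both kinds of explanation for a logistic regression classifier, assuming the shap and lime packages are installed; the data and feature names are synthetic placeholders rather than the study's clinical variables.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data with placeholder feature names.
feature_names = [f"feature_{i}" for i in range(8)]
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# SHAP: per-sample attributions for a linear model, summarized globally.
shap_explainer = shap.LinearExplainer(model, X_train)
shap_values = shap_explainer.shap_values(X_test)
mean_abs_shap = np.abs(shap_values).mean(axis=0)
print("Mean |SHAP| per feature:", dict(zip(feature_names, mean_abs_shap.round(3))))

# LIME: local explanation for a single test instance.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["no CD", "CD"],
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba,
                                           num_features=5)
print(lime_exp.as_list())
```

With real clinical features, the mean absolute SHAP values give a global ranking of risk factors (shap.summary_plot(shap_values, X_test) yields the usual beeswarm view), while LIME complements this with per-patient, locally faithful explanations.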
ISSN: 0169-2607, 1872-7565
EISSN: 1872-7565
Source: MEDLINE; ScienceDirect Journals (5 years ago - present)
Subjects:
Algorithms
Artificial Intelligence
Bayes Theorem
Cardiovascular disease
Cardiovascular Diseases - diagnosis
Confidentiality
Data Anonymization
Differential privacy
Explainable machine learning
Female
Humans
Logistic Models
Machine Learning
Male
Middle Aged
Privacy
Responsible artificial intelligence
Risk Assessment - methods