Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks
Published in: Computers in biology and medicine, 2023-04, Vol.156 (C), p.106668-106668, Article 106668
Format: Article
Language: English
Online access: Full text
Main authors: Nazir, Sajid; Dickson, Diane M.; Akram, Muhammad Usman
Description: Artificial Intelligence (AI) techniques of deep learning have revolutionized disease diagnosis with their outstanding image classification performance. Despite these outstanding results, the widespread adoption of these techniques in clinical practice is still taking place at a moderate pace. One of the major hindrances is that a trained Deep Neural Network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. This link is of utmost importance in the regulated healthcare domain for increasing trust in an automated diagnosis system among practitioners, patients, and other stakeholders. The application of deep learning to medical imaging has to be interpreted with caution due to health and safety concerns, similar to blame attribution in the case of an accident involving an autonomous car. The consequences of both false positive and false negative cases are far-reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art deep learning algorithms comprise complex interconnected structures and millions of parameters, and have a 'black box' nature, offering little understanding of their inner workings, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques help users understand model predictions, which builds trust in the system, accelerates disease diagnosis, and supports adherence to regulatory requirements.
This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also provide a categorization of XAI techniques, discuss the open challenges, and offer future directions for XAI that will be of interest to clinicians, regulators, and model developers.
Highlights:
• A survey of the Explainable Artificial Intelligence (XAI) techniques for biomedical imaging.
• The XAI techniques are categorized depending on their scope, applicability, and usage.
• XAI techniques are covered from the perspective of medical specialties and imaging modalities.
• The research challenges and directions for XAI applications in medical imaging are provided.
DOI: 10.1016/j.compbiomed.2023.106668
Publisher: United States: Elsevier Ltd
PMID: 36863192
Rights: 2023 The Authors. Published by Elsevier Ltd. All rights reserved.
ORCID: 0000-0001-7945-212X; 0000-0002-9618-0426
ISSN: 0010-4825; EISSN: 1879-0534
Subjects: Algorithms; Artificial Intelligence; Artificial neural networks; Autonomous cars; Backpropagation; Blackbox; Brain; Classification; Clinical medicine; Collision avoidance; Coronaviruses; COVID-19; Decision making; Deep learning; Diagnosis; Diagnostic Imaging; Disease; Explainable artificial intelligence; Features; Humans; Image classification; Interpretable AI; Learning algorithms; Machine Learning; Medical diagnosis; Medical electronics; Medical imaging; Neural networks; Neural Networks, Computer; Patients; Predictions; Predictive models; Subject specialists; Supervised learning; Surveys
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-19T11%3A13%3A37IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_osti_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Survey%20of%20explainable%20artificial%20intelligence%20techniques%20for%20biomedical%20imaging%20with%20deep%20neural%20networks&rft.jtitle=Computers%20in%20biology%20and%20medicine&rft.au=Nazir,%20Sajid&rft.date=2023-04&rft.volume=156&rft.issue=C&rft.spage=106668&rft.epage=106668&rft.pages=106668-106668&rft.artnum=106668&rft.issn=0010-4825&rft.eissn=1879-0534&rft_id=info:doi/10.1016/j.compbiomed.2023.106668&rft_dat=%3Cproquest_osti_%3E2783493512%3C/proquest_osti_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2785188568&rft_id=info:pmid/36863192&rft_els_id=S0010482523001336&rfr_iscdi=true