Evaluation of importance estimators in deep learning classifiers for Computed Tomography

Deep learning has shown superb performance in detecting objects and classifying images, holding great promise for analyzing medical imaging. Translating the success of deep learning to medical imaging, in which doctors need to understand the underlying process, requires the capability to interpret and explain the predictions of neural networks. Interpretability of deep neural networks often relies on estimating the importance of input features (e.g., pixels) with respect to the outcome (e.g., class probability). However, a number of importance estimators (also known as saliency maps) have been developed, and it is unclear which ones are more relevant for medical imaging applications. In the present work, we investigated the performance of several importance estimators in explaining the classification of computed tomography (CT) images by a convolutional deep network, using three distinct evaluation metrics. First, the model-centric fidelity measures the decrease in model accuracy when certain inputs are perturbed. Second, concordance between importance scores and expert-defined segmentation masks is measured at the pixel level by receiver operating characteristic (ROC) curves. Third, a region-wise overlap between an XRAI-based map and the segmentation mask is measured by the Dice Similarity Coefficient (DSC). Overall, two versions of SmoothGrad topped the fidelity and ROC rankings, whereas both Integrated Gradients and SmoothGrad excelled in the DSC evaluation. Interestingly, there was a critical discrepancy between model-centric (fidelity) and human-centric (ROC and DSC) evaluation: the expert expectation and intuition embedded in segmentation maps do not necessarily align with how the model arrived at its prediction. Understanding this difference in interpretability would help harness the power of deep learning in medicine.
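To make the three evaluation views named in the abstract concrete, below is a minimal sketch of how such metrics could be computed for a single image. It assumes a per-pixel importance map (`saliency`), an expert-defined binary mask (`mask`), and a user-supplied `model_accuracy` callable; these names, the zero-value occlusion baseline, and the perturbation fractions are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def pixel_roc_auc(saliency: np.ndarray, mask: np.ndarray) -> float:
    """Pixel-level concordance: treat the binary mask as ground-truth labels
    and the importance scores as classifier scores, then compute ROC AUC."""
    return roc_auc_score(mask.ravel().astype(int), saliency.ravel())


def dice_coefficient(region_map: np.ndarray, mask: np.ndarray) -> float:
    """Region-wise overlap (DSC) between a binarized importance map
    (e.g., the top-ranked XRAI regions) and the segmentation mask."""
    region, truth = region_map.astype(bool), mask.astype(bool)
    denom = region.sum() + truth.sum()
    return 2.0 * np.logical_and(region, truth).sum() / denom if denom else 1.0


def fidelity_scores(image, saliency, model_accuracy, fractions=(0.1, 0.3, 0.5)):
    """Model-centric fidelity: occlude the most important pixels first and
    record how the model's accuracy (or class probability) degrades."""
    order = np.argsort(saliency.ravel())[::-1]  # most important pixels first
    scores = []
    for frac in fractions:
        perturbed = image.ravel().copy()
        perturbed[order[: int(frac * order.size)]] = 0.0  # zero-value baseline
        scores.append(model_accuracy(perturbed.reshape(image.shape)))
    return scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))          # stand-in CT slice
    sal = rng.random((64, 64))          # stand-in importance map
    msk = np.zeros((64, 64), dtype=int)
    msk[20:40, 20:40] = 1               # stand-in expert segmentation
    print("pixel ROC AUC:", pixel_roc_auc(sal, msk))
    print("DSC:", dice_coefficient(sal > 0.5, msk))
    print("fidelity:", fidelity_scores(img, sal, lambda x: float(x.mean())))
```

Note the structural difference the abstract highlights: fidelity perturbs inputs according to the model's own importance ranking, while ROC AUC and DSC compare that ranking against human annotations, which is precisely where the two kinds of evaluation can disagree.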

Bibliographic Details
Published in: arXiv.org, 2022-09
Main authors: Brocki, Lennart; Marchadour, Wistan; Maison, Jonas; Badic, Bogdan; Papadimitroulas, Panagiotis; Hatt, Mathieu; Vermet, Franck; Neo, Christopher Chung
Format: Article
Language: English
Subjects: Artificial neural networks; Computed tomography; Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Deep learning; Estimators; Image classification; Image segmentation; Machine learning; Medical imaging; Model accuracy; Neural networks; Object recognition; Pixels; Statistics - Machine Learning; Tomography
Online access: Full text
DOI: 10.48550/arXiv.2209.15398
EISSN: 2331-8422