DeepVID: Deep Visual Interpretation and Diagnosis for Image Classifiers via Knowledge Distillation
Deep Neural Networks (DNNs) have been extensively used in multiple disciplines due to their superior performance. However, in most cases, DNNs are treated as black boxes, and interpreting their internal working mechanisms is challenging. Given that model trust is often built on an understanding of how a model works, interpreting DNNs becomes especially important in safety-critical applications such as medical diagnosis and autonomous driving.
Saved in:
Published in: | IEEE transactions on visualization and computer graphics 2019-06, Vol.25 (6), p.2168-2180 |
---|---|
Main authors: | Wang, Junpeng, Gou, Liang, Zhang, Wei, Yang, Hao, Shen, Han-Wei |
Format: | Article |
Language: | eng |
Subjects: | Artificial neural networks; Deep learning; Knowledge distillation; Model interpretation; Visual analytics |
Online access: | Order full text |
container_end_page | 2180 |
---|---|
container_issue | 6 |
container_start_page | 2168 |
container_title | IEEE transactions on visualization and computer graphics |
container_volume | 25 |
creator | Wang, Junpeng Gou, Liang Zhang, Wei Yang, Hao Shen, Han-Wei |
description | Deep Neural Networks (DNNs) have been extensively used in multiple disciplines due to their superior performance. However, in most cases, DNNs are treated as black boxes, and interpreting their internal working mechanisms is challenging. Given that model trust is often built on an understanding of how a model works, interpreting DNNs becomes especially important in safety-critical applications (e.g., medical diagnosis, autonomous driving). In this paper, we propose DeepVID, a Deep learning approach to Visually Interpret and Diagnose DNN models, especially image classifiers. Specifically, we train a small, locally faithful model to mimic the behavior of an original cumbersome DNN around a particular data instance of interest; the local model is simple enough to be visually interpreted (e.g., a linear model). Knowledge distillation transfers the knowledge from the cumbersome DNN to the small model, and a deep generative model (i.e., a variational auto-encoder) generates neighbors around the instance of interest. These neighbors, which exhibit small feature variances and carry semantic meaning, can effectively probe the DNN's behavior around the instance of interest and help the small model learn that behavior. Through comprehensive evaluations, as well as case studies conducted together with deep learning experts, we validate the effectiveness of DeepVID. |
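The pipeline sketched in the description (probe a cumbersome DNN with generated neighbors of an instance, then distill its soft predictions into a locally faithful linear student whose weights act as an explanation) can be illustrated roughly as follows. This is a minimal sketch, not DeepVID itself: the `teacher` function is a hypothetical stand-in for the original DNN, and simple Gaussian perturbations stand in for the VAE-generated neighbors; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 8, 3  # feature dimension, number of classes

# Hypothetical stand-in for the cumbersome DNN: a fixed softmax classifier.
W_TEACHER = rng.normal(size=(K, D))

def teacher(x):
    """Return the teacher's class probabilities (soft predictions)."""
    logits = x @ W_TEACHER.T
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Instance of interest, plus neighbors with small feature variance.
# (DeepVID would draw these from a VAE; Gaussian noise is a stand-in.)
x0 = rng.normal(size=D)
neighbors = x0 + 0.1 * rng.normal(size=(500, D))

# Distillation targets: the teacher's soft predictions on the neighbors.
soft = teacher(neighbors)

# Student: a linear model fit to the soft labels by least squares.
X = np.hstack([neighbors, np.ones((len(neighbors), 1))])  # add bias column
W_student, *_ = np.linalg.lstsq(X, soft, rcond=None)      # (D+1, K)

# The student's weights for the predicted class give a per-feature
# saliency explanation of the teacher's local behavior around x0.
cls = int(teacher(x0[None]).argmax())
saliency = W_student[:-1, cls]
```

A real implementation would replace the least-squares fit with a distillation loss (e.g., cross-entropy against temperature-softened teacher outputs) and draw neighbors from a VAE latent space so that perturbations stay semantically meaningful.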
doi_str_mv | 10.1109/TVCG.2019.2903943 |
format | Article |
pmid | 30892211 |
orcid | 0000-0002-1130-9914 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1077-2626 |
ispartof | IEEE transactions on visualization and computer graphics, 2019-06, Vol.25 (6), p.2168-2180 |
issn | 1077-2626 1941-0506 |
language | eng |
recordid | cdi_ieee_primary_8667661 |
source | IEEE Electronic Library (IEL) |
subjects | Analytical models Artificial neural networks Classifiers Data models Deep learning Deep neural networks Diagnosis Distillation generative model knowledge distillation Knowledge management Machine learning Medical imaging model interpretation Neural networks Safety critical Semantics Training Visual analytics |
title | DeepVID: Deep Visual Interpretation and Diagnosis for Image Classifiers via Knowledge Distillation |