Human-in-the-loop Extraction of Interpretable Concepts in Deep Learning Models
The interpretation of deep neural networks (DNNs) has become a key topic as more and more people apply them to solve various problems and make critical decisions. Concept-based explanations have recently become a popular approach for post-hoc interpretation of DNNs. However, identifying human-understandable visual concepts that affect model decisions is a challenging task that is not easily addressed with automatic approaches. We present a novel human-in-the-loop approach to generate user-defined concepts for model interpretation and diagnostics. Central to our proposal is the use of active learning, where human knowledge and feedback are combined to train a concept extractor with very little human labeling effort. We integrate this process into an interactive system, ConceptExtract. Through two case studies, we show how our approach helps analyze model behavior and extract human-friendly concepts for different machine learning tasks and datasets, and how to use these concepts to understand predictions, compare model performance, and make suggestions for model refinement. Quantitative experiments show that our active learning approach can accurately extract meaningful visual concepts. More importantly, by identifying visual concepts that negatively affect model performance, we develop a corresponding data augmentation strategy that consistently improves model performance.
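The abstract describes an active-learning cycle in which a human iteratively labels image patches so that a concept extractor can be trained from very few annotations. The record contains no code; below is a minimal, hypothetical sketch of such a cycle in Python, assuming precomputed patch embeddings from a pretrained network, a logistic-regression extractor, and uncertainty sampling as the query strategy. All names (`active_concept_loop`, `ask_human`) are illustrative and are not the authors' ConceptExtract implementation.

```python
# Hypothetical sketch of the human-in-the-loop active-learning cycle the
# abstract describes: train a lightweight binary concept extractor from a
# small number of human labels. The query strategy (uncertainty sampling)
# and the classifier choice are assumptions, not the authors' design.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_concept_loop(features, ask_human, n_rounds=5, batch=10, seed=0):
    """features: (N, D) image-patch embeddings from a pretrained network.
    ask_human(i) -> 1 or 0: does patch i show the target concept?"""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(features), size=batch, replace=False))
    labels = [ask_human(i) for i in labeled]  # seed labels; assumed to cover both classes
    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        clf.fit(features[labeled], labels)
        # Uncertainty sampling: query the patches the extractor is least sure about.
        p = clf.predict_proba(features)[:, 1]
        order = np.argsort(np.abs(p - 0.5))  # closest to 0.5 = most uncertain first
        seen = set(labeled)
        picked = [i for i in order if i not in seen][:batch]
        labeled += picked
        labels += [ask_human(i) for i in picked]
    clf.fit(features[labeled], labels)
    return clf  # scores how strongly any patch expresses the concept
```

In the paper this interaction runs through a visual interface; the `ask_human` callback stands in for that UI. Once trained, the extractor can score every patch for the concept, which is what lets concept-level statistics be related to model predictions and errors.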
Saved in:
Published in: | IEEE transactions on visualization and computer graphics, 2022-01, Vol. 28 (1), p. 780-790 |
---|---|
Main authors: | Zhao, Zhenge; Xu, Panpan; Scheidegger, Carlos; Ren, Liu |
Format: | Article |
Language: | English |
Subjects: | Active learning; Analytical models; Artificial neural networks; Cognitive tasks; Computational modeling; Computer Graphics; Data models; Decisions; Deep Learning; Deep Neural Network; Explainable AI; Humans; Interactive systems; Machine Learning; Model Interpretation; Neural Networks, Computer; Predictive models; Task analysis; Visual Data Exploration; Visualization |
Online access: | Order full text |
container_end_page | 790 |
---|---|
container_issue | 1 |
container_start_page | 780 |
container_title | IEEE transactions on visualization and computer graphics |
container_volume | 28 |
creator | Zhao, Zhenge; Xu, Panpan; Scheidegger, Carlos; Ren, Liu |
doi_str_mv | 10.1109/TVCG.2021.3114837 |
format | Article |
identifier | ISSN: 1077-2626; EISSN: 1941-0506; DOI: 10.1109/TVCG.2021.3114837; PMID: 34587066 |
ispartof | IEEE transactions on visualization and computer graphics, 2022-01, Vol.28 (1), p.780-790 |
issn | 1077-2626; 1941-0506 |
language | eng |
source | IEEE Electronic Library (IEL) |
subjects | Active learning; Analytical models; Artificial neural networks; Cognitive tasks; Computational modeling; Computer Graphics; Data models; Decisions; Deep Learning; Deep Neural Network; Explainable AI; Humans; Interactive systems; Machine Learning; Model Interpretation; Neural Networks, Computer; Predictive models; Task analysis; Visual Data Exploration; Visualization |
title | Human-in-the-loop Extraction of Interpretable Concepts in Deep Learning Models |