Semantics of the Black-Box: Can Knowledge Graphs Help Make Deep Learning Systems More Interpretable and Explainable?

The recent series of innovations in deep learning (DL) have shown enormous potential to impact individuals and society, both positively and negatively. DL models utilizing massive computing power and enormous datasets have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, and human-computer interactions. However, DL's black-box nature and over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for interpretability and explainability. Furthermore, DLs have not proven their ability to effectively utilize relevant domain knowledge critical to human understanding. This aspect was missing in early data-focused approaches and necessitated knowledge-infused learning (K-iL) to incorporate computational knowledge. This article demonstrates how knowledge, provided as a knowledge graph, is incorporated into DL using K-iL. Through examples from natural language processing applications in healthcare and education, we discuss the utility of K-iL towards interpretability and explainability.
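The abstract describes knowledge infusion only at a conceptual level, and this record does not include the article's code. As a rough, hypothetical sketch of the general idea (not the authors' specific K-iL method), the Python snippet below fuses pretrained knowledge-graph entity embeddings with a text encoder's pooled output before classification; the class name, dimensions, and mean-pooling fusion are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class KnowledgeInfusedClassifier(nn.Module):
    """Hypothetical fusion of text features with knowledge-graph (KG) entity
    embeddings, sketching one common way to 'infuse' KG knowledge into a
    deep model. Not the authors' K-iL implementation."""

    def __init__(self, text_dim=768, kg_dim=128, num_entities=10000, num_classes=2):
        super().__init__()
        # Stand-in for pretrained KG entity embeddings (e.g., trained with
        # TransE over a domain graph such as UMLS); randomly initialized
        # here purely for illustration.
        self.kg_embeddings = nn.Embedding(num_entities, kg_dim)
        self.fuse = nn.Linear(text_dim + kg_dim, text_dim)
        self.classifier = nn.Linear(text_dim, num_classes)

    def forward(self, text_vec, entity_ids):
        # text_vec: (batch, text_dim) pooled output of any text encoder.
        # entity_ids: (batch, k) IDs of KG entities linked in the input text.
        kg_vec = self.kg_embeddings(entity_ids).mean(dim=1)  # pool linked entities
        fused = torch.tanh(self.fuse(torch.cat([text_vec, kg_vec], dim=-1)))
        return self.classifier(fused)

# Usage with dummy inputs: 4 texts, each linked to 5 KG entities.
model = KnowledgeInfusedClassifier()
logits = model(torch.randn(4, 768), torch.randint(0, 10000, (4, 5)))
print(logits.shape)  # torch.Size([4, 2])
```

Because the knowledge-graph entities enter the model explicitly, a prediction can be traced back to the domain concepts that influenced it, which is the interpretability and explainability benefit the abstract attributes to knowledge-infused learning.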

Bibliographic details

Published in: IEEE Internet Computing, 2021-01, Vol. 25 (1), pp. 51-59
Authors: Gaur, Manas; Faldu, Keyur; Sheth, Amit
Format: Article
Language: English
Subjects: Artificial intelligence; Computational modeling; Computer Science; Computer Science, Software Engineering; Computer vision; Deep learning; Human-computer interaction; Medical services; Natural language processing; Science & Technology; Semantics; Technology
DOI: 10.1109/MIC.2020.3031769
ISSN: 1089-7801
EISSN: 1941-0131
Publisher: IEEE, Los Alamitos
Online access: Order full text