A Survey on the Explainability of Supervised Machine Learning
Predictions obtained by, e.g., artificial neural networks have a high accuracy, but humans often perceive the models as black boxes: insights about the decision making are mostly opaque to humans. Understanding the decision making is of paramount importance, particularly in highly sensitive areas such as healthcare or finance. The decision making behind such black boxes needs to be more transparent, accountable, and understandable for humans. This survey paper provides essential definitions and an overview of the different principles and methodologies of explainable Supervised Machine Learning (SML). We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions. Finally, we illustrate the principles by means of an explanatory case study and discuss important future directions.
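To make the abstract's notion of explaining a black box concrete, below is a minimal sketch of one common family of approaches that surveys of explainable SML classify: fitting an interpretable global surrogate (a shallow decision tree) to the predictions of an opaque model. The dataset, model choices, and hyperparameters are illustrative assumptions, not taken from the paper itself.

```python
# Minimal sketch (assumed setup, not from the surveyed paper): explain a
# black-box classifier with an interpretable global surrogate model.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black box: accurate, but its decision making is opaque to humans.
black_box = make_pipeline(
    StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)
).fit(X_train, y_train)

# Surrogate: a shallow tree trained on the black box's *predictions*,
# so its rules approximate the black box's behavior, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on new data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity to the black box: {fidelity:.2%}")
print(export_text(surrogate))  # human-readable if/else decision rules
```

The printed tree is the explanation: a set of human-readable rules approximating the black box, with the fidelity score quantifying how faithfully the surrogate mimics it.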
Saved in:
Published in: | The Journal of artificial intelligence research, 2021-01-19, Vol. 70, pp. 245-317 |
---|---|
Main authors: | Burkart, Nadia; Huber, Marco F. |
Format: | Article |
Language: | English |
Subjects: | Artificial intelligence; Artificial neural networks; Decision making; Machine learning; Model accuracy; Principles; State-of-the-art reviews |
DOI: | 10.1613/jair.1.12228 |
ISSN: | 1076-9757 |
EISSN: | 1943-5037 |
Publisher: | AI Access Foundation, San Francisco |
Online access: | Full text |