Machine Learning Interpretability: A Survey on Methods and Metrics
Saved in:
Published in: | Electronics (Basel) 2019-08, Vol.8 (8), p.832 |
Main authors: | Carvalho, Diogo V.; Pereira, Eduardo M.; Cardoso, Jaime S. |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | |
container_issue | 8 |
container_start_page | 832 |
container_title | Electronics (Basel) |
container_volume | 8 |
creator | Carvalho, Diogo V. Pereira, Eduardo M. Cardoso, Jaime S. |
description | Machine learning systems are becoming increasingly ubiquitous. These systems’ adoption has been expanding, accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes, meaning their internal logic and inner workings are hidden from the user, and even experts cannot fully understand the rationale behind their predictions. Moreover, new regulations and highly regulated domains have made the audit and verifiability of decisions mandatory, increasing the demand for the ability to question, understand, and trust machine learning systems, for which interpretability is indispensable. The research community has recognized this interpretability problem and focused on developing both interpretable models and explanation methods over the past few years. However, the emergence of these methods shows there is no consensus on how to assess explanation quality. Which are the most suitable metrics to assess the quality of an explanation? The aim of this article is to provide a review of the current state of the research field of machine learning interpretability, focusing on the societal impact and on the developed methods and metrics. Furthermore, a complete literature review is presented in order to identify future directions of work in this field. |
doi_str_mv | 10.3390/electronics8080832 |
format | Article |
publisher | MDPI AG, Basel |
rights | 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
orcidid | 0000-0002-2349-4117; 0000-0002-3760-2473 |
fulltext | fulltext |
identifier | ISSN: 2079-9292 |
ispartof | Electronics (Basel), 2019-08, Vol.8 (8), p.832 |
issn | 2079-9292 |
language | eng |
recordid | cdi_proquest_journals_2548382204 |
source | MDPI - Multidisciplinary Digital Publishing Institute; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals |
subjects | Accountability; Algorithms; Artificial intelligence; Decision support systems; Literature reviews; Machine learning; Neural networks; Public policy; Quality assessment; Recommender systems; Society; Transparency |
title | Machine Learning Interpretability: A Survey on Methods and Metrics |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-29T22%3A44%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Machine%20Learning%20Interpretability:%20A%20Survey%20on%20Methods%20and%20Metrics&rft.jtitle=Electronics%20(Basel)&rft.au=Carvalho,%20Diogo%20V.&rft.date=2019-08-01&rft.volume=8&rft.issue=8&rft.spage=832&rft.pages=832-&rft.issn=2079-9292&rft.eissn=2079-9292&rft_id=info:doi/10.3390/electronics8080832&rft_dat=%3Cproquest_cross%3E2548382204%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2548382204&rft_id=info:pmid/&rfr_iscdi=true |