Interpretable Machine Learning: Moving From Mythos to Diagnostics

Despite increasing interest in the field of Interpretable Machine Learning (IML), a significant gap persists between the technical objectives targeted by researchers' methods and the high-level goals of consumers' use cases. In this work, we synthesize foundational work on IML methods and evaluation into an actionable taxonomy. This taxonomy serves as a tool to conceptualize the gap between researchers and consumers, illustrated by the lack of connections between its methods and use cases components. It also provides the foundation from which we describe a three-step workflow to better enable researchers and consumers to work together to discover what types of methods are useful for what use cases. Eventually, by building on the results generated from this workflow, a more complete version of the taxonomy will increasingly allow consumers to find relevant methods for their target use cases and researchers to identify applicable use cases for their proposed methods.

Bibliographic details
Main authors: Chen, Valerie; Li, Jeffrey; Kim, Joon Sik; Plumb, Gregory; Talwalkar, Ameet
Format: Article
Language: English
Subjects: Computer Science - Learning
DOI: 10.48550/arxiv.2103.06254
Published: 2021-03-10
Source: arXiv.org
Rights: Creative Commons Attribution 4.0 (CC BY)
Online access: Full text available at https://arxiv.org/abs/2103.06254