Interpretable Active Learning

Bibliographic Details
Main Authors: Phillips, Richard L; Chang, Kyu Hyun; Friedler, Sorelle A
Format: Article
Language: English
Subjects: Computer Science - Learning; Statistics - Machine Learning
Online Access: Order full text
creator Phillips, Richard L ; Chang, Kyu Hyun ; Friedler, Sorelle A
description Active learning has long been a topic of study in machine learning. However, as increasingly complex and opaque models have become standard practice, the process of active learning, too, has become more opaque. There has been little investigation into interpreting what specific trends and patterns an active learning strategy may be exploring. This work expands on the Local Interpretable Model-agnostic Explanations framework (LIME) to provide explanations for active learning recommendations. We demonstrate how LIME can be used to generate locally faithful explanations for an active learning strategy, and how these explanations can be used to understand how different models and datasets explore a problem space over time. In order to quantify the per-subgroup differences in how an active learning strategy queries spatial regions, we introduce a notion of uncertainty bias (based on disparate impact) to measure the discrepancy in the confidence for a model's predictions between one subgroup and another. Using the uncertainty bias measure, we show that our query explanations accurately reflect the subgroup focus of the active learning queries, allowing for an interpretable explanation of what is being learned as points with similar sources of uncertainty have their uncertainty bias resolved. We demonstrate that this technique can be applied to track uncertainty bias over user-defined clusters or automatically generated clusters based on the source of uncertainty.
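The description above names two technical components: LIME-generated explanations for the points an active learning strategy queries, and an uncertainty bias measure based on disparate impact that compares prediction confidence across subgroups. The sketch below is a minimal illustration, not the authors' implementation: it pairs uncertainty sampling with the open-source lime package and computes a disparate-impact-style ratio of how often each subgroup falls in the model's uncertainty region. The toy data, the subgroup indicator, the 0.65 threshold, and the exact form of the ratio are assumptions made for this example.

# Illustrative sketch (not the paper's code): explain an uncertainty-sampling
# query with LIME and compute a disparate-impact-style "uncertainty bias".
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Toy labeled pool: 4 features, binary class, binary subgroup membership.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
subgroup = (X[:, 3] > 0).astype(int)   # illustrative subgroup indicator, not from the paper

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
proba = model.predict_proba(X)

# Uncertainty sampling: query the point whose top-class probability is lowest.
top_class_prob = proba.max(axis=1)
query_idx = int(np.argmin(top_class_prob))

# LIME explanation of the model's prediction at the queried point,
# i.e. which features drive the uncertainty the strategy is exploring.
explainer = LimeTabularExplainer(
    X,
    feature_names=[f"f{i}" for i in range(X.shape[1])],
    class_names=["0", "1"],
    discretize_continuous=True,
)
explanation = explainer.explain_instance(X[query_idx], model.predict_proba,
                                         num_features=4)
print("Query explanation:", explanation.as_list())

# Uncertainty bias sketch: assuming a disparate-impact-style ratio of how often
# each subgroup lies in the model's uncertainty region (top-class probability
# below an illustrative threshold).
uncertain = top_class_prob < 0.65
rate = lambda g: uncertain[subgroup == g].mean()
uncertainty_bias = rate(1) / max(rate(0), 1e-12)
print("Uncertainty bias (subgroup 1 vs 0):", round(uncertainty_bias, 2))

A ratio near 1 would indicate that the model's uncertainty, and hence the strategy's queries, is spread evenly across subgroups; values far from 1 flag the kind of per-subgroup discrepancy the abstract's measure is intended to surface.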
doi_str_mv 10.48550/arxiv.1708.00049
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1708.00049
language eng
recordid cdi_arxiv_primary_1708_00049
source arXiv.org
subjects Computer Science - Learning ; Statistics - Machine Learning
title Interpretable Active Learning
url https://arxiv.org/abs/1708.00049