On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach

Interpretable and explainable machine learning has seen a recent surge of interest. We focus on safety as a key motivation behind the surge and make the relationship between interpretability and safety more quantitative. Toward assessing safety, we introduce the concept of maximum deviation via an optimization problem to find the largest deviation of a supervised learning model from a reference model regarded as safe. We then show how interpretability facilitates this safety assessment. For models including decision trees, generalized linear and additive models, the maximum deviation can be computed exactly and efficiently. For tree ensembles, which are not regarded as interpretable, discrete optimization techniques can still provide informative bounds. For a broader class of piecewise Lipschitz functions, we leverage the multi-armed bandit literature to show that interpretability produces tighter (regret) bounds on the maximum deviation. We present case studies, including one on mortgage approval, to illustrate our methods and the insights about models that may be obtained from deviation maximization.
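To make the abstract's central quantity concrete: the maximum deviation is d(f, f0) = max over x in the input domain of |f(x) - f0(x)|, where f is the model under assessment and f0 is the reference model regarded as safe. Below is a minimal Python sketch of the additive-model case, where the abstract states the maximum can be computed exactly and efficiently. The function name, the dictionary encoding of per-feature contributions, and the assumption that both models share the same discrete feature domains are illustrative choices for this sketch, not the authors' implementation.

    def max_deviation_additive(f_terms, g_terms):
        """Exact max_x |f(x) - f0(x)| for additive models
        f(x) = sum_j f_j(x_j) and f0(x) = sum_j g_j(x_j).

        f_terms, g_terms: one dict per feature, mapping each feature value
        to that feature's additive contribution; both models are assumed
        (illustratively) to share the same discrete feature domains.
        """
        upper = 0.0  # running max_x (f - f0)(x)
        lower = 0.0  # running min_x (f - f0)(x)
        for f_j, g_j in zip(f_terms, g_terms):
            diffs = [f_j[v] - g_j[v] for v in f_j]
            # The difference of two additive models is itself additive, so
            # its max/min over the joint domain decompose feature by feature.
            upper += max(diffs)
            lower += min(diffs)
        return max(upper, -lower)  # max |h| = max(max h, -(min h))

    # Toy usage: deviation is largest at x = ("a", "y"), giving 1.0 + 0.8 = 1.8.
    f = [{"a": 1.0, "b": 0.2}, {"x": 0.0, "y": 0.5}]
    f0 = [{"a": 0.0, "b": 0.1}, {"x": 0.0, "y": -0.3}]
    print(max_deviation_additive(f, f0))  # 1.8

This per-feature decomposition is precisely how interpretability (here, additivity) makes the global optimization tractable; for non-interpretable models such as tree ensembles, the same quantity instead requires the discrete-optimization bounding techniques the abstract mentions.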

Bibliographic Details
Main Authors: Wei, Dennis; Nair, Rahul; Dhurandhar, Amit; Varshney, Kush R; Daly, Elizabeth M; Singh, Moninder
Format: Article
Language: English
Subjects: Computer Science - Learning; Statistics - Machine Learning
Online access: https://arxiv.org/abs/2211.01498
DOI: 10.48550/arXiv.2211.01498
Published: 2022-11-02
Rights: http://creativecommons.org/licenses/by/4.0 (open access)
Source: arXiv.org