Understanding surrogate explanations: the interplay between complexity, fidelity and coverage
creator | Poyiadzi, Rafael ; Renard, Xavier ; Laugel, Thibault ; Santos-Rodriguez, Raul ; Detyniecki, Marcin |
description | This paper analyses the fundamental ingredients behind surrogate explanations to provide a better understanding of their inner workings. We start our exposition by considering global surrogates, describing the trade-off between complexity of the surrogate and fidelity to the black-box being modelled. We show that transitioning from global to local - reducing coverage - allows for more favourable conditions on the Pareto frontier of fidelity-complexity of a surrogate. We discuss the interplay between complexity, fidelity and coverage, and consider how different user needs can lead to problem formulations where these are either constraints or penalties. We also present experiments that demonstrate how the local surrogate interpretability procedure can be made interactive and lead to better explanations. |
doi_str_mv | 10.48550/arxiv.2107.04309 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2107.04309 |
language | eng |
recordid | cdi_arxiv_primary_2107_04309 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence Computer Science - Learning |
title | Understanding surrogate explanations: the interplay between complexity, fidelity and coverage |