XpertAI: uncovering model strategies for sub-manifolds

In recent years, Explainable AI (XAI) methods have facilitated profound validation and knowledge extraction from ML models. While extensively studied for classification, few XAI solutions have addressed the challenges specific to regression models. In regression, explanations need to be precisely formulated to address specific user queries (e.g., distinguishing between 'Why is the output above 0?' and 'Why is the output above 50?'). They should furthermore reflect the model's behavior on the relevant data sub-manifold. In this paper, we introduce XpertAI, a framework that disentangles the prediction strategy into multiple range-specific sub-strategies and allows the formulation of precise queries about the model (the 'explanandum') as a linear combination of those sub-strategies. XpertAI is formulated generally to work alongside popular XAI attribution techniques, based on occlusion, gradient integration, or reverse propagation. Qualitative and quantitative results demonstrate the benefits of our approach.
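The abstract describes the framework only at a high level. As a rough illustration of the idea, the toy Python sketch below splits a regressor's scalar output into range-specific sub-strategies via a soft gating over the output, attributes each sub-strategy with a simple occlusion method, and combines the per-range attributions linearly according to a query vector. Every concrete choice here (the Gaussian gating, the range centers, the occlusion baseline, the query weights) is an illustrative assumption, not XpertAI's actual construction.

```python
import numpy as np

def range_gates(y, centers, width):
    """Softly assign a scalar output y to K output ranges
    (Gaussian gating followed by normalization; an assumed choice)."""
    logits = -((y - centers) ** 2) / (2.0 * width ** 2)
    w = np.exp(logits - logits.max())
    return w / w.sum()

def occlusion(g, x, baseline=0.0):
    """Occlusion attribution of a scalar function g at point x:
    score_i = g(x) - g(x with feature i replaced by the baseline)."""
    scores = np.zeros_like(x)
    for i in range(x.size):
        x_occ = x.copy()
        x_occ[i] = baseline
        scores[i] = g(x) - g(x_occ)
    return scores

# Toy regressor standing in for a trained ML model.
f = lambda x: 3.0 * x[0] + 2.0 * x[1] ** 2

centers = np.array([0.0, 25.0, 50.0])  # assumed anchors of the output ranges
width = 10.0

# Range-specific sub-strategies g_k with sum_k g_k(x) = f(x):
# each expert owns the share of the output that its gate assigns to it.
def sub_strategy(k):
    return lambda x: range_gates(f(x), centers, width)[k] * f(x)

x = np.array([4.0, 3.0])           # f(x) = 30
query = np.array([0.0, 1.0, 0.0])  # hypothetical query about the mid range

# Explanation of the query = linear combination of per-expert attributions.
expert_attr = np.stack([occlusion(sub_strategy(k), x)
                        for k in range(len(centers))])
print("f(x) =", f(x))
print("query-specific attribution:", query @ expert_attr)
```

The point of the linear combination is that a single attribution backend can answer different range-specific queries ('Why above 0?' vs. 'Why above 50?') simply by changing the query weights.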

Bibliographic details

Main authors: Letzgus, Simon; Müller, Klaus-Robert; Montavon, Grégoire
Format: Article
Language: English
Published: 2024-03-12 (arXiv)
Subjects: Computer Science - Learning
DOI: 10.48550/arxiv.2403.07486
License: http://creativecommons.org/licenses/by/4.0
Online access: https://arxiv.org/abs/2403.07486