A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI

Explainable Artificial Intelligence (XAI) has gained significant attention recently as the demand for transparency and interpretability of machine learning models has increased. In particular, XAI for time series data has become increasingly important in finance, healthcare, and climate science. However, evaluating the quality of explanations, such as attributions provided by XAI techniques, remains challenging. This paper provides an in-depth analysis of using perturbations to evaluate attributions extracted from time series models. A perturbation analysis involves systematically modifying the input data and evaluating the impact on the attributions generated by the XAI method. We apply this approach to several state-of-the-art XAI techniques and evaluate their performance on three time series classification datasets. Our results demonstrate that the perturbation analysis approach can effectively evaluate the quality of attributions and provide insights into the strengths and limitations of XAI techniques. Such an approach can guide the selection of XAI methods for time series data, e.g., focusing on return time rather than precision, and facilitate the development of more reliable and interpretable machine learning models for time series analysis.
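
To make the evaluation idea concrete, below is a minimal Python sketch of one common form of perturbation analysis, not the authors' implementation. It assumes a classifier exposed through a batch predict_proba-style function passed in as predict_fn, a univariate series as a 1-D NumPy array, one attribution score per time step, and zeroing as the perturbation; the paper may use different perturbation strategies and quality metrics. The helper names perturb_top_k, perturbation_score, and random_baseline are hypothetical.

    # Minimal sketch of a perturbation-based evaluation of time series attributions.
    # Assumptions (not taken from the paper): predict_fn maps a batch of shape
    # (n, T) to class probabilities of shape (n, n_classes); attributions hold one
    # relevance score per time step; the perturbation sets selected steps to zero.
    import numpy as np

    def perturb_top_k(series: np.ndarray, attributions: np.ndarray, k: int) -> np.ndarray:
        """Zero out the k time steps with the highest absolute attribution."""
        perturbed = series.copy()
        top_idx = np.argsort(-np.abs(attributions))[:k]
        perturbed[top_idx] = 0.0
        return perturbed

    def perturbation_score(predict_fn, series, attributions, label, k):
        """Drop in predicted probability of the true label after perturbing the
        k most relevant time steps; a larger drop suggests more faithful attributions."""
        base = predict_fn(series[None, :])[0, label]
        pert = predict_fn(perturb_top_k(series, attributions, k)[None, :])[0, label]
        return base - pert

    def random_baseline(predict_fn, series, label, k, trials=10, seed=0):
        """Same measurement with k randomly chosen time steps, as a sanity baseline:
        informative attributions should cause a clearly larger drop than random ones."""
        rng = np.random.default_rng(seed)
        base = predict_fn(series[None, :])[0, label]
        drops = []
        for _ in range(trials):
            perturbed = series.copy()
            perturbed[rng.choice(len(series), size=k, replace=False)] = 0.0
            drops.append(base - predict_fn(perturbed[None, :])[0, label])
        return float(np.mean(drops))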

Bibliographic Details

Main Authors: Schlegel, Udo; Keim, Daniel A
Format: Article
Language: English
Published: 2023-07-11
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
DOI: 10.48550/arxiv.2307.05104
Source: arXiv.org
Full text: https://arxiv.org/abs/2307.05104