Model-agnostic interpretation by visualization of feature perturbations

Interpretation of machine learning models has become one of the most important research topics due to the necessity of maintaining control and avoiding bias in these algorithms. Since new machine learning algorithms are published every day, there is a need for novel model-agnostic interpretation approaches that can be applied to a great variety of algorithms. One advantageous way to interpret a machine learning model is to feed it different input data and observe the changes in its predictions; with such an approach, practitioners can relate data patterns to the model's decisions. This work proposes a model-agnostic interpretation approach that uses visualization of feature perturbations induced by the particle swarm optimization (PSO) algorithm. We validate our approach on publicly available datasets, showing the capability to enhance the interpretation of different classifiers while yielding very stable results compared with state-of-the-art algorithms.
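The abstract only sketches the mechanism, so the following is a minimal, hypothetical illustration of the general idea: perturb the inputs of a trained classifier and treat the perturbation that most changes its predictions as a feature-importance signal, with the perturbation found by a plain particle swarm optimizer. The dataset, classifier, PSO variant and hyperparameters, and the use of the optimized perturbation's magnitude as an importance proxy are all assumptions made for this sketch; they are not the paper's actual algorithm or visualization.

# Minimal sketch, assuming scikit-learn and NumPy; every concrete choice
# below (dataset, classifier, PSO settings, importance proxy) is
# illustrative, not taken from the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
P0 = model.predict_proba(X)          # baseline predictions
bound = X.std(axis=0)                # allow at most ~1 std of shift per feature

def fitness(delta):
    # Reward change in predicted probabilities; penalize perturbation size so
    # only features that genuinely move the prediction keep a large delta.
    shift = np.abs(P0 - model.predict_proba(X + delta)).sum(axis=1).mean()
    return shift - 0.5 * np.mean(np.abs(delta) / bound)

# Plain global-best PSO over the perturbation vector.
n_particles, n_iters = 20, 30
pos = rng.uniform(-bound, bound, size=(n_particles, X.shape[1]))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -bound, bound)
    vals = np.array([fitness(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()]

# Read the optimized perturbation as a (crude) feature-importance signal.
importance = np.abs(gbest) / bound   # scale-free: fraction of the allowed shift
for j in np.argsort(-importance)[:5]:
    print(f"feature {j}: relative perturbation {importance[j]:.2f}")

Bounding each coordinate by one feature standard deviation keeps the perturbations on the data's own scale, and the size penalty in the fitness matters: without it the optimizer would simply inflate every coordinate, since larger shifts trivially change predictions more.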

Authors: Marcílio-Jr, Wilson E.; Eler, Danilo M.; Breve, Fabrício
Format: Article
Language: English
Subjects: Computer Science - Learning
Published: 2021-01-25
DOI: 10.48550/arxiv.2101.10502
Online access: https://arxiv.org/abs/2101.10502