Longitudinal Distance: Towards Accountable Instance Attribution

Bibliographic Details
Main Authors: Weber, Rosina O; Goel, Prateek; Amiri, Shideh; Simpson, Gideon
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence
Online Access: Order full text
Description: Previous research in interpretable machine learning (IML) and explainable artificial intelligence (XAI) can be broadly categorized as either seeking interpretability in the agent's model (i.e., IML) or considering the context of the user in addition to the model (i.e., XAI). The former can be further categorized as feature or instance attribution. Example- or sample-based methods, such as those using or inspired by case-based reasoning (CBR), rely on various approaches to select instances, but the selected instances are not necessarily those responsible for an agent's decision. Furthermore, existing approaches have focused on interpretability and explainability but fall short when it comes to accountability. Inspired by case-based reasoning principles, this paper introduces a pseudo-metric we call Longitudinal distance and shows how it can be used to attribute instances to a neural network agent's decision, which can potentially be used to build accountable CBR agents.
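This record reproduces only the abstract, not the paper's formal definition of Longitudinal distance. As an illustrative sketch only, assuming a generic activation-space distance and hypothetical names throughout, the following Python shows how a pseudo-metric over a network's hidden representations can attribute a decision to nearby training instances. Because distinct inputs can map to identical activations, the distance can be zero for non-identical inputs, which is what makes this a pseudo-metric rather than a metric.

```python
# Illustrative sketch only: the paper's Longitudinal distance is not
# defined in this record, so a generic activation-space pseudo-metric
# stands in for it. All names here are hypothetical.
import numpy as np

def activation_distance(phi_x: np.ndarray, phi_y: np.ndarray) -> float:
    """Euclidean distance between hidden representations.

    Distinct inputs can share identical activations, so
    d(x, y) = 0 does not imply x = y: a pseudo-metric, not a metric.
    """
    return float(np.linalg.norm(phi_x - phi_y))

def attribute_instances(query_repr: np.ndarray,
                        train_reprs: np.ndarray,
                        k: int = 3) -> np.ndarray:
    """Return indices of the k training instances closest to the query
    in representation space, one simple way of attributing a decision
    to specific training examples."""
    dists = np.array([activation_distance(query_repr, r) for r in train_reprs])
    return np.argsort(dists)[:k]

# Usage: in practice these would be a trained network's penultimate-layer
# activations; random arrays are stand-ins for the sketch.
rng = np.random.default_rng(0)
train_reprs = rng.random((100, 16))
query_repr = rng.random(16)
print(attribute_instances(query_repr, train_reprs, k=3))
```

Note that selecting nearest neighbors in a representation space is only one candidate reading of instance attribution; as the abstract points out, instances selected this way are not necessarily those responsible for the agent's decision.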
DOI: 10.48550/arxiv.2108.10437
Published: 2021-08-23
Source: arXiv.org
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-20T09%3A09%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Longitudinal%20Distance:%20Towards%20Accountable%20Instance%20Attribution&rft.au=Weber,%20Rosina%20O&rft.date=2021-08-23&rft_id=info:doi/10.48550/arxiv.2108.10437&rft_dat=%3Carxiv_GOX%3E2108_10437%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true