Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI
creator | Nguyen, Elisa ; Bertram, Johannes ; Kortukov, Evgenii ; Song, Jean Y ; Oh, Seong Joon |
description | While Explainable AI (XAI) aims to make AI understandable and useful to humans, it has been criticised for relying too much on formalism and solutionism, focusing more on mathematical soundness than user needs. We propose an alternative to this bottom-up approach inspired by design thinking: the XAI research community should adopt a top-down, user-focused perspective to ensure user relevance. We illustrate this with a relatively young subfield of XAI, Training Data Attribution (TDA). With the surge in TDA research and growing competition, the field risks repeating the same patterns of solutionism. We conducted a needfinding study with a diverse group of AI practitioners to identify potential user needs related to TDA. Through interviews (N=10) and a systematic survey (N=31), we uncovered new TDA tasks that are currently largely overlooked. We invite the TDA and XAI communities to consider these novel tasks and improve the user relevance of their research outcomes. |
doi_str_mv | 10.48550/arxiv.2409.16978 |
format | Article |
creationdate | 2024-09-25 |
linktorsrc | https://arxiv.org/abs/2409.16978 |
rights | http://creativecommons.org/licenses/by/4.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2409.16978 |
language | eng |
recordid | cdi_arxiv_primary_2409_16978 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence ; Computer Science - Human-Computer Interaction ; Computer Science - Learning |
title | Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI |