Diagrammatization: Rationalizing with diagrammatic AI explanations for abductive-deductive reasoning on hypotheses

Many visualizations have been developed for explainable AI (XAI), but they often require further reasoning by users to interpret. We argue that XAI should support diagrammatic and abductive reasoning for the AI to perform hypothesis generation and evaluation to reduce the interpretability gap. We propose Diagrammatization to i) perform Peircean abductive-deductive reasoning, ii) follow domain conventions, and iii) explain with diagrams visually or verbally. We implemented DiagramNet for a clinical application to predict cardiac diagnoses from heart auscultation, and explain with shape-based murmur diagrams. In modeling studies, we found that DiagramNet not only provides faithful murmur shape explanations, but also has better prediction performance than baseline models. We further demonstrate the interpretability and trustworthiness of diagrammatic explanations in a qualitative user study with medical students, showing that clinically-relevant, diagrammatic explanations are preferred over technical saliency map explanations. This work contributes insights into providing domain-conventional abductive explanations for user-centric XAI.
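The abductive-deductive loop the abstract describes (abductively generate candidate diagnoses, deduce the murmur shape each one predicts, then evaluate how well that shape fits the observed sound) can be illustrated with a minimal sketch. This is not the DiagramNet implementation; the names Hypothesis, shape_fit, and abduce_and_evaluate, and the toy murmur envelopes, are all hypothetical.

# Minimal sketch of abductive-deductive reasoning over diagnostic hypotheses.
# All names and data here are hypothetical illustrations, not DiagramNet's API.
from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class Hypothesis:
    """A candidate diagnosis with its conventional (idealized) murmur shape."""
    diagnosis: str
    expected_shape: Sequence[float]  # murmur loudness envelope over time

def shape_fit(observed: Sequence[float], expected: Sequence[float]) -> float:
    """Deductive step: score how well the observed envelope matches the
    shape the hypothesis predicts (negative squared error; higher is better)."""
    return -sum((o - e) ** 2 for o, e in zip(observed, expected))

def abduce_and_evaluate(observed: Sequence[float],
                        hypotheses: List[Hypothesis]) -> Hypothesis:
    """Abductive step: propose candidate explanations, then keep the one
    whose deduced consequences best fit the observation."""
    return max(hypotheses, key=lambda h: shape_fit(observed, h.expected_shape))

# Toy usage: a crescendo-decrescendo vs. a decrescendo murmur shape.
candidates = [
    Hypothesis("aortic stenosis (crescendo-decrescendo)", [0.2, 0.8, 1.0, 0.8, 0.2]),
    Hypothesis("aortic regurgitation (decrescendo)", [1.0, 0.8, 0.5, 0.3, 0.1]),
]
observed_envelope = [0.3, 0.7, 0.9, 0.7, 0.3]
print(abduce_and_evaluate(observed_envelope, candidates).diagnosis)
# -> aortic stenosis (crescendo-decrescendo)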

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Lim, Brian Y, Cahaly, Joseph P, Sng, Chester Y. F, Chew, Adam
Format: Article
Language: English
Subjects:
Online Access: Order full text
creator Lim, Brian Y; Cahaly, Joseph P; Sng, Chester Y. F; Chew, Adam
description Many visualizations have been developed for explainable AI (XAI), but they often require further reasoning by users to interpret. We argue that XAI should support diagrammatic and abductive reasoning for the AI to perform hypothesis generation and evaluation to reduce the interpretability gap. We propose Diagrammatization to i) perform Peircean abductive-deductive reasoning, ii) follow domain conventions, and iii) explain with diagrams visually or verbally. We implemented DiagramNet for a clinical application to predict cardiac diagnoses from heart auscultation, and explain with shape-based murmur diagrams. In modeling studies, we found that DiagramNet not only provides faithful murmur shape explanations, but also has better prediction performance than baseline models. We further demonstrate the interpretability and trustworthiness of diagrammatic explanations in a qualitative user study with medical students, showing that clinically-relevant, diagrammatic explanations are preferred over technical saliency map explanations. This work contributes insights into providing domain-conventional abductive explanations for user-centric XAI.
format Article
creationdate 2023-02-02
identifier DOI: 10.48550/arxiv.2302.01241
language eng
recordid cdi_arxiv_primary_2302_01241
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Human-Computer Interaction
Computer Science - Learning
title Diagrammatization: Rationalizing with diagrammatic AI explanations for abductive-deductive reasoning on hypotheses
url https://arxiv.org/abs/2302.01241