Understanding the Effect of Algorithm Transparency of Model Explanations in Text-to-SQL Semantic Parsing
Explaining the decisions of AI has become vital for fostering appropriate user trust in these systems. This paper investigates explanations for a structured prediction task called "text-to-SQL semantic parsing", which translates a natural language question into a structured query language (SQL) program. In this task setting, we designed three levels of model explanation, each exposing a different amount of the model's decision-making details (called "algorithm transparency"), and investigated how different model explanations could potentially yield different impacts on the user experience. Our study with ~100 participants shows that (1) the low-/high-transparency explanations often lead to less/more user reliance on the model decisions, whereas the medium-transparency explanations strike a good balance. We also show that (2) only the medium-transparency participant group was able to engage further in the interaction and exhibit increasing performance over time, and that (3) they showed the least changes in trust before and after the study.
Saved in:
Main authors: | Rai, Daking; Weiland, Rydia R; Herrera, Kayla Margaret Gabriella; Shaw, Tyler H; Yao, Ziyu |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Human-Computer Interaction; Computer Science - Information Retrieval |
Online access: | Order full text |
creator | Rai, Daking; Weiland, Rydia R; Herrera, Kayla Margaret Gabriella; Shaw, Tyler H; Yao, Ziyu |
description | Explaining the decisions of AI has become vital for fostering appropriate user trust in these systems. This paper investigates explanations for a structured prediction task called "text-to-SQL semantic parsing", which translates a natural language question into a structured query language (SQL) program. In this task setting, we designed three levels of model explanation, each exposing a different amount of the model's decision-making details (called "algorithm transparency"), and investigated how different model explanations could potentially yield different impacts on the user experience. Our study with ~100 participants shows that (1) the low-/high-transparency explanations often lead to less/more user reliance on the model decisions, whereas the medium-transparency explanations strike a good balance. We also show that (2) only the medium-transparency participant group was able to engage further in the interaction and exhibit increasing performance over time, and that (3) they showed the least changes in trust before and after the study. |
format | Article |
creationdate | 2024-10-04 |
identifier | DOI: 10.48550/arxiv.2410.16283 |
language | eng |
recordid | cdi_arxiv_primary_2410_16283 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Human-Computer Interaction; Computer Science - Information Retrieval |
title | Understanding the Effect of Algorithm Transparency of Model Explanations in Text-to-SQL Semantic Parsing |
url | https://arxiv.org/abs/2410.16283 |