Understanding the Effect of Algorithm Transparency of Model Explanations in Text-to-SQL Semantic Parsing
Format: Article
Language: English
Online Access: Order full text
Abstract: Explaining the decisions of AI has become vital for fostering appropriate user trust in these systems. This paper investigates explanations for a structured prediction task called "text-to-SQL Semantic Parsing", which translates a natural language question into a structured query language (SQL) program. In this task setting, we designed three levels of model explanation, each exposing a different amount of the model's decision-making details (its "algorithm transparency"), and investigated how the different explanations could affect the user experience. Our study with ~100 participants shows that (1) low-transparency explanations often lead to less user reliance on the model's decisions and high-transparency explanations to more, whereas medium-transparency explanations strike a good balance; (2) only the medium-transparency participant group was able to engage further in the interaction and exhibit increasing performance over time; and (3) that group also showed the least change in trust before and after the study.
DOI: 10.48550/arxiv.2410.16283