Investigating Explainability of Generative AI for Code through Scenario-based Design
Saved in:
Main Authors: | , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | What does it mean for a generative AI model to be explainable? The emergent discipline of explainable AI (XAI) has made great strides in helping people understand discriminative models. Less attention has been paid to generative models, which produce artifacts rather than decisions as output. Meanwhile, generative AI (GenAI) technologies are maturing and being applied to application domains such as software engineering. Using scenario-based design and question-driven XAI design approaches, we explore users' explainability needs for GenAI in three software engineering use cases: natural language to code, code translation, and code auto-completion. We conducted 9 workshops with 43 software engineers in which real examples from state-of-the-art generative AI models were used to elicit users' explainability needs. Drawing from prior work, we also propose 4 types of XAI features for GenAI for code and gather additional design ideas from participants. Our work explores explainability needs for GenAI for code and demonstrates how human-centered approaches can drive the technical development of XAI in novel domains. |
DOI: | 10.48550/arxiv.2202.04903 |