InterroLang: Exploring NLP Models and Datasets through Dialogue-based Explanations

While recently developed NLP explainability methods let us open the black box in various ways (Madsen et al., 2022), a missing ingredient in this endeavor is an interactive tool offering a conversational interface. Such a dialogue system can help users explore datasets and models with explanations in a contextualized manner, e.g. via clarification or follow-up questions, and through a natural language interface. We adapt the conversational explanation framework TalkToModel (Slack et al., 2022) to the NLP domain, add new NLP-specific operations such as free-text rationalization, and illustrate its generalizability on three NLP tasks (dialogue act classification, question answering, hate speech detection). To recognize user queries for explanations, we evaluate fine-tuned and few-shot prompting models and implement a novel Adapter-based approach. We then conduct two user studies on (1) the perceived correctness and helpfulness of the dialogues, and (2) the simulatability, i.e. how objectively helpful dialogical explanations are for humans in figuring out the model's predicted label when it is not shown. We found that rationalization and feature attribution were helpful in explaining the model behavior. Moreover, users could more reliably predict the model outcome based on an explanation dialogue rather than on one-off explanations.

Bibliographic Details
Main authors: Feldhus, Nils; Wang, Qianli; Anikina, Tatiana; Chopra, Sahil; Oguz, Cennet; Möller, Sebastian
Format: Article
Language: English (eng)
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Human-Computer Interaction
Online access: Order full text
DOI: 10.48550/arxiv.2310.05592
Published: 2023-10-09
Rights: http://creativecommons.org/licenses/by/4.0
Full text: https://arxiv.org/abs/2310.05592
Source: arXiv.org