CREPE: Open-Domain Question Answering with False Presuppositions
Main Authors: , , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Information seeking users often pose questions with false presuppositions, especially when asking about unfamiliar topics. Most existing question answering (QA) datasets, in contrast, assume all questions have well-defined answers. We introduce CREPE, a QA dataset containing a natural distribution of presupposition failures from online information-seeking forums. We find that 25% of questions contain false presuppositions, and provide annotations for these presuppositions and their corrections. Through extensive baseline experiments, we show that adaptations of existing open-domain QA models can find presuppositions moderately well, but struggle when predicting whether a presupposition is factually correct. This is in large part due to difficulty in retrieving relevant evidence passages from a large text corpus. CREPE provides a benchmark to study question answering in the wild, and our analyses provide avenues for future work in better modeling and further studying the task.
DOI: 10.48550/arxiv.2211.17257
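
To make the task described in the summary concrete, here is a minimal sketch of what a CREPE-style annotated example might look like. The record above does not specify the dataset's schema, so the field names, class, and example entry below are all hypothetical, chosen only to mirror the two subtasks the abstract names: identifying a presupposition and judging whether it is factually correct.

```python
# Hypothetical sketch of a CREPE-style example; the field names are assumed,
# not taken from the dataset's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CrepeExample:
    question: str
    # Per the abstract, roughly 25% of questions carry a false presupposition;
    # for the remaining ~75%, these two annotation fields would be absent.
    false_presupposition: Optional[str] = None
    correction: Optional[str] = None

# Invented illustration, not a real dataset entry.
ex = CrepeExample(
    question="Why does the Sun orbit the Earth once a year?",
    false_presupposition="The Sun orbits the Earth.",
    correction="The Earth orbits the Sun; the apparent yearly motion "
               "of the Sun follows from that.",
)

# The two subtasks the abstract describes: finding the presupposition,
# then predicting whether it is factually correct.
print(ex.false_presupposition is not None)  # True: a false presupposition exists
```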