Fine-tuned LLMs Know More, Hallucinate Less with Few-Shot Sequence-to-Sequence Semantic Parsing over Wikidata
Format: Article
Language: English
Abstract: While large language models (LLMs) can answer many questions correctly, they can also hallucinate and give wrong answers. Wikidata, with its over 12 billion facts, can be used to ground LLMs and improve their factuality. This paper presents WikiWebQuestions, a high-quality question answering benchmark for Wikidata. Ported over from WebQuestions for Freebase, it consists of real-world questions annotated with SPARQL. The paper also presents a few-shot sequence-to-sequence semantic parser for Wikidata. We modify SPARQL to use unique domain and property names instead of their opaque IDs, and we train the parser to use either the results from an entity linker or mentions in the query. We fine-tune LLaMA by adding the few-shot training data to the data used to fine-tune Alpaca. Our experimental results demonstrate the effectiveness of this methodology, establishing a strong baseline of 76% and 65% answer accuracy on the dev and test sets of WikiWebQuestions, respectively. By pairing our semantic parser with GPT-3, we combine verifiable results with qualified GPT-3 guesses to provide useful answers to 96% of the questions in the dev set. We also show that our method outperforms the state of the art on the QALD-7 Wikidata dataset by 3.6% in F1 score.
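To illustrate the SPARQL modification the abstract describes (generating queries with unique domain and property names rather than opaque P/Q identifiers), here is a minimal Python sketch that rewrites a name-based query into an executable ID-based one. The lookup tables, function name, and example query are illustrative assumptions, not the paper's actual implementation; a real system would resolve labels against Wikidata itself.

```python
# Hypothetical label-to-ID tables; a real system would look these up
# in Wikidata rather than hard-coding them.
PROPERTY_IDS = {"country": "P17", "capital": "P36"}
ENTITY_IDS = {"Germany": "Q183"}

def to_executable_sparql(readable_query: str) -> str:
    """Replace human-readable property and entity names with Wikidata
    IDs so the parser's output can be run against the SPARQL endpoint."""
    query = readable_query
    for label, pid in PROPERTY_IDS.items():
        query = query.replace(f"wdt:{label}", f"wdt:{pid}")
    for label, qid in ENTITY_IDS.items():
        query = query.replace(f"wd:{label}", f"wd:{qid}")
    return query

# A semantic parser trained on readable names might emit:
readable = "SELECT ?x WHERE { wd:Germany wdt:capital ?x . }"
print(to_executable_sparql(readable))
# -> SELECT ?x WHERE { wd:Q183 wdt:P36 ?x . }
```

The design intuition is that names like `capital` appear in the LLM's pretraining data while IDs like `P36` largely do not, so generating names and mapping them back to IDs afterward plays to the model's strengths.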
DOI: 10.48550/arxiv.2305.14202