Make a Choice! Knowledge Base Question Answering with In-Context Learning
Saved in:
Main authors: , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Question answering over knowledge bases (KBQA) aims to answer factoid questions with a given knowledge base (KB). Due to the large scale of KBs, it is impossible for annotated data to cover all fact schemas in the KB, which poses a challenge to the generalization ability of methods that require a sufficient amount of annotated data. Recently, LLMs have shown strong few-shot performance on many NLP tasks. We expect LLMs to help existing methods improve their generalization ability, especially in low-resource situations. In this paper, we present McL-KBQA, a framework that incorporates the few-shot ability of LLMs into a KBQA method via ICL-based multiple choice and thereby improves effectiveness on QA tasks. Experimental results on two KBQA datasets demonstrate the competitive performance of McL-KBQA, with strong improvements in generalization. We hope to explore a new approach to QA tasks, starting from KBQA in conjunction with LLMs: how to generate answers normatively and correctly with strong generalization.
DOI: 10.48550/arxiv.2305.13972
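The abstract names ICL-based multiple choice as the core mechanism: a base KBQA model proposes candidate answers, and an LLM picks among them via a few-shot prompt. Below is a minimal sketch of how such a step could look, assuming a base model that supplies candidates and a generic text-completion function `llm_complete`; the function names, prompt format, and fallback logic are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: an ICL-based multiple-choice step for KBQA.
# Assumed setup: a base KBQA model has already produced candidate answers,
# and `llm_complete` is a stand-in for any text-completion LLM API.
from typing import Callable, List, Tuple

Demo = Tuple[str, List[str], str]  # (question, candidates, correct letter)

def build_mc_prompt(demos: List[Demo], question: str, candidates: List[str]) -> str:
    """Format few-shot exemplars and the test question as multiple choice."""
    lines: List[str] = []
    for demo_q, demo_cands, letter in demos:
        lines.append(f"Question: {demo_q}")
        lines += [f"{chr(ord('A') + i)}. {c}" for i, c in enumerate(demo_cands)]
        lines += [f"Answer: {letter}", ""]
    lines.append(f"Question: {question}")
    lines += [f"{chr(ord('A') + i)}. {c}" for i, c in enumerate(candidates)]
    lines.append("Answer:")
    return "\n".join(lines)

def choose_answer(llm_complete: Callable[[str], str],
                  demos: List[Demo], question: str,
                  candidates: List[str]) -> str:
    """Let the LLM pick one candidate; fall back to the top-ranked one."""
    reply = llm_complete(build_mc_prompt(demos, question, candidates)).strip()
    if reply and "A" <= reply[0] <= chr(ord("A") + len(candidates) - 1):
        return candidates[ord(reply[0]) - ord("A")]
    return candidates[0]  # base model's top candidate as a safe default
```

Constraining the LLM to choose among enumerated candidates keeps its output normative (a single letter) rather than free-form text, which is one plausible reading of how the framework ensures answers are generated "normatively and correctly".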