TIARA: Multi-grained Retrieval for Robust Question Answering over Large Knowledge Bases
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Pre-trained language models (PLMs) have shown their effectiveness in multiple scenarios. However, question answering over knowledge bases (KBQA) remains challenging, especially regarding coverage and generalization settings. This is due to two main factors: i) understanding the semantics of both questions and relevant knowledge from the KB; ii) generating executable logical forms with both semantic and syntactic correctness. In this paper, we present a new KBQA model, TIARA, which addresses these issues by applying multi-grained retrieval to help the PLM focus on the most relevant KB contexts, viz., entities, exemplary logical forms, and schema items. Moreover, constrained decoding is used to control the output space and reduce generation errors. Experiments over important benchmarks demonstrate the effectiveness of our approach. TIARA outperforms previous SOTA, including those using PLMs or oracle entity annotations, by at least 4.1 and 1.1 F1 points on GrailQA and WebQuestionsSP, respectively.
DOI: 10.48550/arxiv.2210.12925
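
The abstract describes constrained decoding as the mechanism that keeps generated logical forms inside a valid output space. The following is a minimal sketch of one way to realize such a constraint with the prefix_allowed_tokens_fn hook of Hugging Face Transformers; the model choice (t5-small), the allowed-term list, and the example question are illustrative assumptions, not TIARA's actual grammar or training setup.

from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical allowed vocabulary: S-expression punctuation, operators,
# and schema items that a retrieval stage might return for one question.
allowed_terms = ["(", ")", "AND", "JOIN", "ARGMAX", "people.person.spouse_s"]
allowed_ids = set()
for term in allowed_terms:
    allowed_ids.update(tokenizer(term, add_special_tokens=False).input_ids)
allowed_ids.add(tokenizer.eos_token_id)

def prefix_allowed_tokens_fn(batch_id, input_ids):
    # Called at every decoding step; restricting the returned token ids
    # keeps beam search inside the (approximate) logical-form vocabulary.
    return list(allowed_ids)

question = "translate question to logical form: who is the person's spouse?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=64,
    num_beams=4,
    prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

In a full system, the allowed set would be recomputed per question from the retrieved entities, exemplary logical forms, and schema items, rather than fixed as in this sketch.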