Active entailment encoding for explanation tree construction using parsimonious generation of hard negatives
Saved in:
Main Authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Entailment trees have been proposed to simulate the human reasoning
process of explanation generation in the context of open-domain textual
question answering. However, in practice, manually constructing these
explanation trees proves to be a laborious process that requires active human
involvement. Given the complexity of capturing the line of reasoning from a
question to its answer, or from a claim to its premises, the issue arises of
how to assist the user in efficiently constructing multi-level entailment
trees given a large set of available facts. In this paper, we frame the
construction of entailment trees as a sequence of active premise selection
steps, i.e., for each intermediate node in an explanation tree, the expert
needs to annotate positive and negative examples of premise facts from a large
candidate list. We then iteratively fine-tune pre-trained Transformer models
with the resulting positive and tightly controlled negative samples, aiming to
balance the encoding of semantic relationships and explanatory entailment
relationships. Experimental evaluation confirms the measurable efficiency
gains of the proposed active fine-tuning method in facilitating entailment
tree construction: up to 20% improvement in explanatory premise selection when
compared against several alternatives. |
DOI: | 10.48550/arxiv.2208.01376 |
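
The abstract describes the method only at a high level. As a rough,
self-contained illustration of one active premise-selection round with hard
negatives, the Python sketch below substitutes a toy bag-of-words encoder for
the paper's pre-trained Transformer; all names (`TinyEncoder`, `rank_premises`,
`finetune_step`), the toy tokenizer, and the triplet-margin objective are
hypothetical stand-ins under stated assumptions, not the authors' actual
implementation.

```python
# Sketch of one active premise-selection round (hypothetical, not the paper's code).
import torch
import torch.nn.functional as F

VOCAB, DIM = 5000, 64

class TinyEncoder(torch.nn.Module):
    """Toy stand-in for a pre-trained Transformer sentence encoder."""
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.EmbeddingBag(VOCAB, DIM)  # mean-pools token embeddings

    def forward(self, token_ids):
        return F.normalize(self.emb(token_ids), dim=-1)  # unit-length sentence vector

def tokenize(text):
    # Hash-based toy tokenizer; a real system would use the Transformer's own tokenizer.
    return torch.tensor([[hash(w) % VOCAB for w in text.lower().split()]])

def rank_premises(encoder, conclusion, candidates):
    """Rank candidate facts by cosine similarity to the target (intermediate) conclusion."""
    with torch.no_grad():
        c = encoder(tokenize(conclusion))
        scores = [(encoder(tokenize(p)) @ c.T).item() for p in candidates]
    return sorted(zip(candidates, scores), key=lambda x: -x[1])

def finetune_step(encoder, opt, conclusion, positives, hard_negatives, margin=0.2):
    """Triplet-style update: pull expert-confirmed premises toward the conclusion,
    push expert-rejected (hard negative) facts away from it."""
    c = encoder(tokenize(conclusion))
    loss = torch.zeros(())
    for pos in positives:
        for neg in hard_negatives:
            s_pos = (encoder(tokenize(pos)) @ c.T).squeeze()
            s_neg = (encoder(tokenize(neg)) @ c.T).squeeze()
            loss = loss + F.relu(margin - s_pos + s_neg)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    encoder = TinyEncoder()
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    facts = ["plants produce oxygen",
             "the moon orbits earth",
             "oxygen supports animal respiration"]
    conclusion = "plants help animals breathe"
    # 1. The model proposes a ranking; 2. the expert labels the top candidates;
    # 3. rejected high-scoring candidates become hard negatives for fine-tuning.
    print(rank_premises(encoder, conclusion, facts))
    finetune_step(encoder, opt, conclusion,
                  positives=["plants produce oxygen",
                             "oxygen supports animal respiration"],
                  hard_negatives=["the moon orbits earth"])
```

In this toy loop the hard negatives are exactly the high-similarity facts the
expert rejects, which is one plausible reading of the "tightly controlled
negative samples" mentioned in the abstract.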