Event-Centric Question Answering via Contrastive Learning and Invertible Event Transformation
Format: Article
Language: English
Abstract: Human reading comprehension often requires reasoning over event semantic relations in narratives, represented by Event-centric Question-Answering (QA). To address event-centric QA, we propose a novel QA model with contrastive learning and invertible event transformation, called TranCLR. Our proposed model utilizes an invertible transformation matrix to project semantic vectors of events into a common event embedding space, trained with contrastive learning, and thus naturally injects event semantic knowledge into mainstream QA pipelines. The transformation matrix is fine-tuned with the annotated event relation types between events that occur in questions and those in answers, using event-aware question vectors. Experimental results on the Event Semantic Relation Reasoning (ESTER) dataset show significant improvements in both generative and extractive settings over existing strong baselines, achieving over an 8.4% gain in token-level F1 score and a 3.0% gain in Exact Match (EM) score under the multi-answer setting. Qualitative analysis reveals the high quality of the answers generated by TranCLR, demonstrating the feasibility of injecting event knowledge into QA model learning. Our code and models can be found at https://github.com/LuJunru/TranCLR.
DOI: 10.48550/arxiv.2210.12902
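
The abstract describes two core mechanisms: an invertible matrix that projects event vectors from the QA encoder into a shared event embedding space, and a contrastive objective that groups events by their annotated relation types. The following PyTorch code is a minimal, hypothetical sketch of that idea only; it is not the authors' released implementation (see the GitHub link above for that). The class `InvertibleEventProjector`, the function `contrastive_loss`, the orthogonal parameterization used to guarantee invertibility, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of the idea behind TranCLR, not the authors' code.
# Assumes event vectors come from a QA encoder (e.g. a RoBERTa-style
# backbone) and that the invertible transformation is parameterized as an
# orthogonal matrix, which is one simple way to guarantee invertibility.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InvertibleEventProjector(nn.Module):
    """Projects encoder event vectors into a shared event embedding space
    via an invertible (here: orthogonal) square matrix."""

    def __init__(self, dim: int):
        super().__init__()
        linear = nn.Linear(dim, dim, bias=False)
        # The orthogonal parameterization keeps the weight invertible:
        # its inverse is simply its transpose.
        self.proj = nn.utils.parametrizations.orthogonal(linear)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # nn.Linear computes x @ W^T.
        return self.proj(x)

    def invert(self, z: torch.Tensor) -> torch.Tensor:
        # Inverse of x @ W^T for orthogonal W is z @ W.
        return z @ self.proj.weight


def contrastive_loss(z: torch.Tensor,
                     relation_types: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Supervised InfoNCE-style loss: projected event embeddings sharing
    the same annotated relation type are pulled together, others apart."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.T / temperature
    n = z.size(0)
    diag = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(diag, float("-inf"))  # exclude self-pairs
    pos = (relation_types.unsqueeze(0) == relation_types.unsqueeze(1)) & ~diag
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    # Average log-probability over each anchor's positives.
    pos_counts = pos.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos_counts)
    has_pos = pos.any(dim=1)  # only anchors with a positive contribute
    return per_anchor[has_pos].mean()


# Toy usage with random vectors standing in for encoder outputs and
# randomly assigned relation-type labels.
dim = 768
projector = InvertibleEventProjector(dim)
event_vecs = torch.randn(16, dim)
relation_types = torch.randint(0, 5, (16,))
loss = contrastive_loss(projector(event_vecs), relation_types)
loss.backward()
```

In practice the contrastive loss would be trained jointly with the usual QA objective so that, as the abstract puts it, event semantic knowledge is injected into a mainstream QA pipeline rather than learned in isolation; the exact joint training recipe is specified in the paper, not in this sketch.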