Complex Reasoning over Logical Queries on Commonsense Knowledge Graphs
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Event commonsense reasoning requires the ability to reason about the relationship between events, as well as to infer the implicit context underlying that relationship. However, data scarcity makes it challenging for language models to learn to generate commonsense inferences for contexts and questions involving interactions between complex events. To address this, we present COM2 (COMplex COMmonsense), a new dataset created by sampling multi-hop logical queries (e.g., the joint effect or cause of both events A and B, or the effect of the effect of event C) from an existing commonsense knowledge graph (CSKG) and verbalizing them, using handcrafted rules and large language models, into multiple-choice and text-generation questions. Our experiments show that language models trained on COM2 exhibit significant improvements in complex reasoning ability, resulting in enhanced zero-shot performance on both in-domain and out-of-domain question answering and generative commonsense reasoning tasks, without expensive human annotations. Code and data are available at https://github.com/tqfang/complex-commonsense-reasoning.
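
To make the construction concrete, here is a minimal Python sketch of the general idea: sample a two-hop query (the effect of the effect of an event) from a toy set of CSKG triples and verbalize it with a handcrafted template. This is not the paper's actual pipeline; the toy triples, the xEffect relation label, and the helper names (sample_two_hop, verbalize) are illustrative assumptions.

```python
# Minimal sketch: sample a 2-hop "effect of the effect" query from a toy
# commonsense knowledge graph and verbalize it with a handcrafted template.
# The triples and template wording below are illustrative assumptions.
import random

# Toy CSKG: (head event, relation, tail event) triples.
TRIPLES = [
    ("PersonX loses their job", "xEffect", "PersonX feels stressed"),
    ("PersonX feels stressed", "xEffect", "PersonX has trouble sleeping"),
    ("PersonX wins the lottery", "xEffect", "PersonX feels excited"),
]

def sample_two_hop(triples):
    """Sample a chain h -xEffect-> m -xEffect-> t, if one exists."""
    by_head = {}
    for h, r, t in triples:
        by_head.setdefault(h, []).append((r, t))
    chains = [
        (h, m, t2)
        for h, r1, m in triples
        for r2, t2 in by_head.get(m, [])
        if r1 == "xEffect" and r2 == "xEffect"
    ]
    return random.choice(chains) if chains else None

def verbalize(chain):
    """Turn the sampled chain into a text-generation style question."""
    head, _mid, answer = chain
    question = f"What is the effect of the effect of: {head}?"
    return question, answer

chain = sample_two_hop(TRIPLES)
if chain:
    q, a = verbalize(chain)
    print(q)        # "What is the effect of the effect of: PersonX loses their job?"
    print("->", a)  # "PersonX has trouble sleeping"
```

A real pipeline would sample such queries from a full CSKG and, as the abstract notes, use handcrafted rules together with large language models to turn them into fluent multiple-choice and text-generation questions.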
DOI: 10.48550/arxiv.2403.07398