Causality extraction model based on two-stage GCN


Published in: Soft Computing (Berlin, Germany), 2022-12, Vol. 26 (24), pp. 13815-13828
Authors: Zhu, Guangli; Sun, Zhengyan; Zhang, Shunxiang; Wei, Subo; Li, KuanChing
Format: Article
Language: English
Abstract: As a form of indirect causality, cascaded causality can be used for event knowledge graph construction, causal inference, scenario analysis, and other applications. Existing GCN methods do not adequately mine context information and relevant-entity information, which weakens causal inference and inevitably reduces the extraction accuracy of cascaded causality. To solve this problem, this paper proposes a causality extraction model based on a two-stage GCN. To obtain rich entity features, the model combines sentiment polarity with a knowledge base to build a library of candidate causal entities. First, the BERT model is pre-trained on context information and relevant entity information extracted from the entity library to obtain the final entity nodes. Second, every possible edge between pairs of entity nodes is derived from the semantic dependency graph and fed into the first-stage GCN, which produces a preliminary directed causality graph. Finally, this directed graph is fed into the second-stage GCN for deep multi-hop causal inference, so that cascaded causality is inferred and extracted by the two-stage model. Experiments show that the extraction accuracy of cascaded causality is further improved.
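The pipeline described in the abstract (BERT-derived entity nodes, candidate edges from the semantic dependency graph scored by a first-stage GCN, then second-stage message passing over the resulting directed causality graph for multi-hop inference) can be illustrated with a minimal PyTorch-style sketch. This is a hypothetical reconstruction, not the authors' code: the class names (GCNLayer, TwoStageGCN), the bilinear edge scorer, and all tensor shapes are assumptions made for illustration.

# Minimal sketch of a two-stage GCN pipeline as described in the abstract.
# All names, layers, and shapes are hypothetical, not the paper's implementation.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # adj: (N, N) normalized adjacency; h: (N, in_dim) node features
        return torch.relu(adj @ self.linear(h))

class TwoStageGCN(nn.Module):
    """Stage 1 scores candidate causal edges over the semantic dependency
    graph; stage 2 propagates over the induced directed causality graph,
    which is what enables multi-hop (cascaded) inference."""
    def __init__(self, dim):
        super().__init__()
        self.stage1 = GCNLayer(dim, dim)
        self.edge_scorer = nn.Bilinear(dim, dim, 1)  # p(cause -> effect)
        self.stage2 = GCNLayer(dim, dim)

    def forward(self, node_emb, dep_adj):
        # node_emb: (N, dim) stand-in for BERT-derived entity-node embeddings
        h = self.stage1(node_emb, dep_adj)
        # Score every ordered pair (i, j) as a potential directed causal edge.
        n = h.size(0)
        src = h.unsqueeze(1).expand(n, n, -1).reshape(n * n, -1)
        dst = h.unsqueeze(0).expand(n, n, -1).reshape(n * n, -1)
        causal_adj = torch.sigmoid(self.edge_scorer(src, dst)).view(n, n)
        # Stage 2: message passing over the preliminary causality graph.
        h2 = self.stage2(h, causal_adj)
        return causal_adj, h2

if __name__ == "__main__":
    n, dim = 5, 16
    nodes = torch.randn(n, dim)  # placeholder entity embeddings
    dep = torch.eye(n)           # placeholder dependency-graph adjacency
    edges, states = TwoStageGCN(dim)(nodes, dep)
    print(edges.shape, states.shape)  # (5, 5) edge probabilities, (5, 16)

In this sketch the first stage turns soft edge scores into a new adjacency matrix, so the second stage can chain A -> B and B -> C evidence into an A -> C inference, which is the cascaded-causality behavior the paper targets.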
ISSN: 1432-7643, 1433-7479
DOI: 10.1007/s00500-022-07370-8