Research on Chinese Semantic Relation Extraction in Marine Engine Rooms Based on Multi-feature Fusion

Bibliographic Details
Published in: IEEE Access, 2024-01, Vol. 12, p. 1-1
Main Authors: Liu, Xicai; Wang, Zhengquan; Wang, Fubo
Format: Article
Language: English
Online Access: Full text
Abstract: This research addresses the challenge of weak dependency-relation learning caused by excessive contextual distance between segmented words when extracting entity relations for marine engine room knowledge graphs. We develop a model that integrates an enhanced Chinese syntactic structure with multi-feature fusion to extract Chinese semantic relationships in engine rooms. First, we construct a relational structure graph from a syntactic dependency tree based on the dependency relations among segmented words, and transform this graph into a character adjacency matrix. To incorporate syntactic graph structural features, we process this matrix together with BERT-encoded embeddings using a graph convolutional network (GCN). We then use an attention mechanism to combine the syntactic graph structural features with the context features extracted by BERT, yielding a multi-feature fused representation. Finally, we use this representation to extract entity relations by training a relation selector with reinforcement learning, which optimizes the relation embeddings and improves the accuracy of relation judgment. Experimental results on a Chinese semantic dataset for marine engine rooms show that the model achieves an F1 score of 87.64%, outperforming several baseline models. These findings indicate that fusing semantic and syntactic structural features enriches the model's informational content and interpretive capability.
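The abstract describes applying a GCN to a character adjacency matrix derived from the syntactic dependency tree and then fusing the resulting graph features with BERT context features via attention. The sketch below is only a rough illustration of that fusion step under stated assumptions: it uses PyTorch, random tensors stand in for BERT output and for the dependency-derived adjacency matrix, and the SyntaxContextFusion module, its dimensions, and the dot-product attention with a residual connection are illustrative choices, not the authors' implementation (the reinforcement-learning relation selector is not shown).

```python
# Illustrative sketch (not the paper's released code): one GCN layer over a
# character adjacency matrix built from dependency relations, fused with
# BERT-style context embeddings through dot-product attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntaxContextFusion(nn.Module):
    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.gcn_weight = nn.Linear(hidden_dim, hidden_dim)   # GCN transform W
        self.attn_query = nn.Linear(hidden_dim, hidden_dim)
        self.attn_key = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, context: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # context: (batch, seq_len, hidden) BERT-encoded character embeddings
        # adjacency: (batch, seq_len, seq_len) character adjacency matrix derived
        #            from the syntactic dependency tree (1 where characters are linked)
        # Row-normalize A + I, as in standard GCN formulations.
        a_hat = adjacency + torch.eye(adjacency.size(-1), device=adjacency.device)
        deg = a_hat.sum(-1, keepdim=True).clamp(min=1.0)
        a_norm = a_hat / deg
        # One graph convolution: propagate context features along dependency edges.
        syntax = F.relu(a_norm @ self.gcn_weight(context))
        # Attention fusion: each context position attends over the syntactic features.
        scores = self.attn_query(context) @ self.attn_key(syntax).transpose(1, 2)
        weights = F.softmax(scores / context.size(-1) ** 0.5, dim=-1)
        fused = context + weights @ syntax   # multi-feature fused representation
        return fused

# Toy usage with random stand-ins for BERT output and a dependency adjacency matrix.
if __name__ == "__main__":
    ctx = torch.randn(2, 16, 768)
    adj = (torch.rand(2, 16, 16) > 0.8).float()
    print(SyntaxContextFusion()(ctx, adj).shape)   # torch.Size([2, 16, 768])
```

In the paper's pipeline, the fused representation produced at this stage would then feed the relation selector trained with reinforcement learning to score candidate relations.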
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3518614