Utilizing Large Language Models for Hyper Knowledge Graph Construction in Mine Hoist Fault Analysis

Bibliographic Details
Published in: Symmetry (Basel), 2024-12, Vol. 16 (12), p. 1600
Authors: Shu, Xiaoling; Dang, Xiaochao; Dong, Xiaohui; Li, Fenfang
Format: Article
Language: English
Online access: Full text
Abstract: The rapid development of artificial intelligence is driving intelligent transformation across many fields, and knowledge graph construction in particular has seen significant progress. However, research on hyper-relational knowledge graphs in the industrial domain remains relatively weak: traditional construction methods suffer from low automation, high cost, and poor reproducibility and portability. To address these challenges, this paper proposes an optimized construction process for a hyper-relational knowledge graph of mine hoist faults based on large language models. The process leverages the strengths of large language models and the logical structure of fault knowledge, exploiting GPT's reasoning abilities. A combined strategy of template-based and template-free prompts is designed to generate fault entities and relationships. To address potential data incompleteness introduced by prompt engineering, link prediction is used to optimize the initial data generated by GPT o1-preview: the graph's topological structure is integrated with domain-specific logical rules, the Variational EM algorithm is applied for alternating optimization, and text embeddings are incorporated to further enhance data quality. Experimental results show that the optimized MHSD improves MRR by 0.008 over the unoptimized MHSD and by 0.002 over the recent KICGPT. Finally, the optimized data were successfully imported into Neo4j for visualization.
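
The abstract's final step, importing the optimized hyper-relational facts into Neo4j, can be illustrated with a minimal sketch. The snippet below assumes the official neo4j Python driver and reifies each fact as a Statement node so that qualifier key-value pairs sit alongside the main (head, relation, tail) triple; the example entities, relation names, and connection settings are hypothetical and are not taken from the paper or the MHSD data, and the authors' actual import schema may differ.

```python
from neo4j import GraphDatabase

# Illustrative hyper-relational fault facts: a main triple plus qualifier
# key-value pairs. All names here are hypothetical examples.
FACTS = [
    {
        "head": "WireRopeWear",
        "relation": "causes",
        "tail": "HoistVibration",
        "qualifiers": {"severity": "high", "component": "head sheave"},
    },
]

# Each fact is reified as a Statement node so its qualifiers can be stored
# as node properties alongside the main triple.
IMPORT_QUERY = """
MERGE (h:Entity {name: $head})
MERGE (t:Entity {name: $tail})
CREATE (s:Statement {relation: $relation})
SET s += $qualifiers
CREATE (h)-[:SUBJECT_OF]->(s)
CREATE (s)-[:HAS_OBJECT]->(t)
"""

def load_facts(uri="bolt://localhost:7687", user="neo4j", password="password"):
    """Write the example facts into a local Neo4j instance."""
    driver = GraphDatabase.driver(uri, auth=(user, password))
    with driver.session() as session:
        for fact in FACTS:
            session.run(
                IMPORT_QUERY,
                head=fact["head"],
                tail=fact["tail"],
                relation=fact["relation"],
                qualifiers=fact["qualifiers"],
            )
    driver.close()

if __name__ == "__main__":
    load_facts()
```

Reifying each statement as its own node is one common way to represent qualifier-bearing (hyper-relational) facts in a property graph, since Neo4j relationships connect exactly two nodes and cannot carry additional entities directly.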
ISSN: 2073-8994
DOI: 10.3390/sym16121600