KGValidator: A Framework for Automatic Validation of Knowledge Graph Construction
Saved in:
Main Authors: | |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | This study explores the use of Large Language Models (LLMs) for automatic
evaluation of knowledge graph (KG) completion models. Historically, validating
information in KGs has been a challenging task, requiring large-scale human
annotation at prohibitive cost. With the emergence of general-purpose
generative AI and LLMs, it is now plausible that human-in-the-loop validation
could be replaced by a generative agent. We introduce a framework for
consistency and validation when using generative models to validate knowledge
graphs. Our framework builds on recent open-source tools for structural and
semantic validation of LLM outputs, and on flexible approaches to fact
checking and verification, supported by the ability to reference external
knowledge sources of any kind. The design is easy to adapt and extend, and it
can be used to verify any kind of graph-structured data through a combination
of model-intrinsic knowledge, user-supplied context, and agents capable of
external knowledge retrieval. |
---|---|
DOI: | 10.48550/arxiv.2404.15923 |
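The abstract describes the framework only at a high level. As an illustration of the general pattern it names (structural validation of LLM outputs combined with externally supplied evidence), and not the authors' actual implementation, the sketch below constrains an LLM's verdict on a single KG triple to a structured schema using Pydantic. The schema fields, prompt wording, and the generic `llm_call` callable are assumptions made for this example.

```python
# A minimal sketch, assuming a Pydantic schema and a generic LLM callable;
# this is not the KGValidator codebase.
from typing import Callable, Optional
from pydantic import BaseModel, ValidationError


class TripleVerdict(BaseModel):
    """Structured verdict the LLM must return for one (subject, relation, object) triple."""
    is_valid: bool       # does the triple hold given the available evidence?
    confidence: float    # model-reported confidence in [0, 1]
    justification: str   # short natural-language rationale


def validate_triple(
    subject: str,
    relation: str,
    obj: str,
    context: str,
    llm_call: Callable[[str], str],
) -> Optional[TripleVerdict]:
    """Ask an LLM to judge a triple and parse its JSON reply into TripleVerdict.

    `context` can carry user-supplied text or passages retrieved from any
    external knowledge source; `llm_call` is any function mapping a prompt
    to the model's raw text response.
    """
    prompt = (
        "Given the context below, decide whether the knowledge-graph triple is correct.\n"
        f"Context: {context}\n"
        f"Triple: ({subject}, {relation}, {obj})\n"
        "Reply with JSON matching the schema "
        '{"is_valid": bool, "confidence": float, "justification": str}.'
    )
    raw = llm_call(prompt)
    try:
        # Structural and semantic validation of the LLM output against the schema.
        return TripleVerdict.model_validate_json(raw)
    except ValidationError:
        # The reply did not conform to the schema; treat as an unusable verdict.
        return None
```

Because evidence is passed in through the `context` argument and the model is reached through an arbitrary `llm_call`, the same pattern can draw on model-intrinsic knowledge alone, user-supplied passages, or text fetched by a retrieval agent, mirroring the flexibility the abstract claims for the framework.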