Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph
Saved in:
Main authors: , , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Although large language models (LLMs) have achieved significant success in various tasks, they often struggle with hallucination problems, especially in scenarios requiring deep and responsible reasoning. These issues could be partially addressed by introducing an external knowledge graph (KG) into LLM reasoning. In this paper, we propose a new LLM-KG integration paradigm, "LLM⊗KG", which treats the LLM as an agent that interactively explores related entities and relations on KGs and performs reasoning based on the retrieved knowledge. We further implement this paradigm through a new approach called Think-on-Graph (ToG), in which the LLM agent iteratively executes beam search on the KG, discovers the most promising reasoning paths, and returns the most likely reasoning results. We use a number of well-designed experiments to examine and illustrate the following advantages of ToG: 1) compared with plain LLMs, ToG has stronger deep-reasoning power; 2) ToG offers knowledge traceability and knowledge correctability by leveraging LLM reasoning and expert feedback; 3) ToG provides a flexible plug-and-play framework for different LLMs, KGs, and prompting strategies without any additional training cost; 4) the performance of ToG with small LLMs can exceed that of large LLMs such as GPT-4 in certain scenarios, reducing the cost of LLM deployment and application. As a training-free method with lower computational cost and better generality, ToG achieves overall SOTA on 6 out of 9 datasets, where most previous SOTAs rely on additional training.
DOI: 10.48550/arxiv.2307.07697
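
The core loop described in the abstract, an LLM agent that repeatedly expands candidate reasoning paths on a KG and keeps only the most promising ones, is a beam search. Below is a minimal sketch of that idea. The helpers `neighbors` (outgoing KG edges for an entity) and `score` (an LLM-judged relevance of a path to the question) are hypothetical stand-ins, not the paper's actual interface; this illustrates the technique rather than the authors' implementation.

```python
# Minimal sketch of a ToG-style iterative beam search over a knowledge graph.
# Assumptions (not from the paper): `neighbors` and `score` are caller-supplied
# callables standing in for KG retrieval and LLM-based path scoring.

from typing import Callable

def think_on_graph(
    question: str,
    topic_entities: list[str],
    neighbors: Callable[[str], list[tuple[str, str]]],  # entity -> [(relation, entity)]
    score: Callable[[str, list[tuple]], float],         # (question, path) -> relevance
    beam_width: int = 3,
    max_depth: int = 3,
) -> list[list[tuple]]:
    """Iteratively expand and prune reasoning paths rooted at the topic entities."""
    # Each beam entry is a path: a list of (head, relation, tail) triples,
    # seeded with a dummy triple whose tail is the topic entity.
    beams: list[list[tuple]] = [[("", "", e)] for e in topic_entities]
    for _ in range(max_depth):
        candidates = []
        for path in beams:
            frontier = path[-1][2]
            # Expand the frontier entity along its outgoing KG edges.
            for rel, obj in neighbors(frontier):
                candidates.append(path + [(frontier, rel, obj)])
        if not candidates:
            break
        # The LLM scores each candidate path against the question;
        # only the top-N paths survive (beam pruning).
        candidates.sort(key=lambda p: score(question, p), reverse=True)
        beams = candidates[:beam_width]
    return beams
```

In the paper, the stopping condition is itself delegated to the LLM: exploration ends once the agent judges the retrieved paths sufficient to answer the question. The fixed `max_depth` cap here is a simplification of that behavior.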