Investigating Instruction Tuning Large Language Models on Graphs
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Inspired by the recent advancements of Large Language Models (LLMs) in NLP tasks, there is growing interest in applying LLMs to graph-related tasks. This study delves into the capabilities of instruction-following LLMs for engaging with real-world graphs, aiming to offer empirical insights into how LLMs can effectively interact with graphs and generalize across graph tasks. We begin by constructing a dataset designed for instruction tuning, which comprises a diverse collection of 79 graph-related tasks from academic and e-commerce domains, featuring 44,240 training instances and 18,960 test samples. Utilizing this benchmark, our initial investigation focuses on identifying the optimal graph representation that serves as a conduit for LLMs to understand complex graph structures. Our findings indicate that JSON format for graph representation consistently outperforms natural language and code formats across various LLMs and graph types. Furthermore, we examine the key factors that influence the generalization abilities of instruction-tuned LLMs by evaluating their performance on both in-domain and out-of-domain graph tasks.
DOI: 10.48550/arxiv.2408.05457
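The abstract reports that a JSON encoding of the graph works best as an LLM input format. The sketch below shows one way such an encoding could look; the exact schema and prompt wording used in the paper are not given here, so the node/edge layout and the citation-graph example are purely illustrative assumptions.

```python
import json

# Hypothetical example: serializing a small citation graph into JSON for an
# instruction-style LLM prompt. The schema (node list with text attributes plus
# an edge list) is an assumption for illustration, not the paper's exact format.
graph = {
    "nodes": [
        {"id": 0, "title": "Attention Is All You Need"},
        {"id": 1, "title": "BERT: Pre-training of Deep Bidirectional Transformers"},
        {"id": 2, "title": "Language Models are Few-Shot Learners"},
    ],
    "edges": [
        {"source": 1, "target": 0},  # paper 1 cites paper 0
        {"source": 2, "target": 0},
        {"source": 2, "target": 1},
    ],
}

# Embed the JSON-encoded graph in a task instruction for the model.
prompt = (
    "You are given a citation graph in JSON format:\n"
    + json.dumps(graph, indent=2)
    + "\n\nTask: list the ids of all papers cited by node 2."
)
print(prompt)
```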