GraphGPT: Graph Instruction Tuning for Large Language Models
Format: Article
Language: English
Abstract: Graph Neural Networks (GNNs) have evolved to understand graph structures
through recursive exchanges and aggregations among nodes. To enhance
robustness, self-supervised learning (SSL) has become a vital tool for data
augmentation. Traditional methods often depend on fine-tuning with
task-specific labels, limiting their effectiveness when labeled data is scarce.
Our research tackles this by advancing graph model generalization in zero-shot
learning environments. Inspired by the success of large language models (LLMs),
we aim to create a graph-oriented LLM capable of exceptional generalization
across various datasets and tasks without relying on downstream graph data. We
introduce the GraphGPT framework, which integrates LLMs with graph structural
knowledge through graph instruction tuning. This framework includes a
text-graph grounding component to link textual and graph structures and a
dual-stage instruction tuning approach with a lightweight graph-text alignment
projector. These innovations allow LLMs to comprehend complex graph structures
and enhance adaptability across diverse datasets and tasks. Our framework
demonstrates superior generalization in both supervised and zero-shot graph
learning tasks, surpassing existing benchmarks. The open-sourced model
implementation of our GraphGPT is available at
https://github.com/HKUDS/GraphGPT.
DOI: 10.48550/arxiv.2310.13023
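The abstract mentions a lightweight graph-text alignment projector that lets the LLM consume graph structural information, but gives no implementation details. Below is a minimal sketch, assuming the projector is a single learnable linear map from GNN node embeddings into the LLM's token-embedding space; the class name GraphTextProjector and the chosen dimensions are illustrative assumptions, not taken from the released code at https://github.com/HKUDS/GraphGPT.

```python
# Minimal sketch (not the authors' released code) of a lightweight
# graph-text alignment projector: a single linear layer mapping GNN node
# embeddings into the LLM's token-embedding space, so that "graph tokens"
# can be interleaved with text tokens during instruction tuning.
# Class name, dimensions, and the single-linear-layer design are assumptions.
import torch
import torch.nn as nn


class GraphTextProjector(nn.Module):
    def __init__(self, graph_dim: int, llm_dim: int):
        super().__init__()
        # Lightweight alignment: one learnable linear map.
        self.proj = nn.Linear(graph_dim, llm_dim)

    def forward(self, node_embeddings: torch.Tensor) -> torch.Tensor:
        # node_embeddings: (num_nodes, graph_dim) from a pretrained graph encoder
        # returns:         (num_nodes, llm_dim) graph tokens for the LLM
        return self.proj(node_embeddings)


if __name__ == "__main__":
    # Toy usage: project 5 node embeddings (dim 128) into a 4096-dim LLM space
    # and prepend them to a batch of text token embeddings.
    projector = GraphTextProjector(graph_dim=128, llm_dim=4096)
    graph_tokens = projector(torch.randn(5, 128))           # (5, 4096)
    text_tokens = torch.randn(32, 4096)                      # (seq_len, 4096)
    llm_input = torch.cat([graph_tokens, text_tokens], dim=0)  # (37, 4096)
    print(llm_input.shape)
```

One plausible reading of the "lightweight" qualifier is that the graph encoder and the LLM stay frozen during the first tuning stage and only the projector's parameters are updated, keeping the alignment step cheap; this is an interpretation of the abstract, not a detail it states.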