GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network
Format: Article
Language: English
Abstract: Large Language Models (LLMs) exhibit strong In-Context Learning (ICL) capabilities when prompts with demonstrations are used. However, fine-tuning remains crucial to further enhance their adaptability. Prompt-based fine-tuning is effective in low-data scenarios, but its high demand on computing resources limits its practicality. We address this issue by introducing GNNavi, a prompt-based parameter-efficient fine-tuning (PEFT) approach. GNNavi builds on insights into ICL's information flow dynamics, which indicate that label words in prompts act as anchors for information propagation. It employs a Graph Neural Network (GNN) layer to precisely guide the aggregation and distribution of information flow during prompt processing, hardwiring the desired information flow into the GNN. Our experiments on text classification tasks with GPT-2 and Llama2 show that GNNavi surpasses standard prompt-based fine-tuning methods in few-shot settings while updating just 0.2% to 0.5% of the parameters. We compare GNNavi with prevalent PEFT approaches such as prefix tuning, LoRA, and Adapter in terms of performance and efficiency. Our analysis reveals that GNNavi enhances information flow and ensures a clear aggregation process.
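To make the mechanism concrete, below is a minimal PyTorch sketch of the idea the abstract describes: a single message-passing layer applied over a transformer layer's hidden states, with a fixed adjacency matrix that hardwires the reported ICL flow (demonstration tokens feed their label-word anchors, and the anchors feed the target position). The names `GNNaviLayer` and `build_flow_adjacency` and the GCN-style mean aggregation are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only; names and the GCN-style mean aggregation are
# assumptions, not the paper's actual code.
import torch
import torch.nn as nn

class GNNaviLayer(nn.Module):
    """A graph layer applied to hidden states of shape [batch, seq, hidden]."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # The only trainable parameters: one linear projection. Keeping the
        # LLM frozen and training just a layer like this is how a wrapper can
        # stay in the 0.2%-0.5% parameter range the abstract reports.
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: [seq, seq], row-normalized; row i lists the positions that
        # position i aggregates information from.
        aggregated = adj @ hidden               # message passing over tokens
        return hidden + self.proj(aggregated)   # residual keeps the LM signal

def build_flow_adjacency(seq_len: int, label_positions: list[int],
                         target_position: int) -> torch.Tensor:
    """Hardwire the flow: demonstration tokens -> label anchors -> target."""
    adj = torch.eye(seq_len)  # self-loops: every token keeps its own state
    prev = 0
    for pos in label_positions:          # assumed sorted, one per demonstration
        adj[pos, prev:pos] = 1.0         # anchor aggregates its demonstration
        prev = pos + 1
    for pos in label_positions:
        adj[target_position, pos] = 1.0  # target reads from all anchors
    return adj / adj.sum(dim=-1, keepdim=True)  # row-normalize (mean agg.)
```

In use, such a layer would wrap one frozen transformer layer of GPT-2 or Llama2, and only `proj` would receive gradients during few-shot fine-tuning.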
DOI: 10.48550/arxiv.2402.11709