Supervised Knowledge Makes Large Language Models Better In-context Learners
Saved in:
Main authors: | |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Large Language Models (LLMs) exhibit emerging in-context learning abilities
through prompt engineering. The recent progress in large-scale generative
models has further expanded their use in real-world language applications.
However, the critical challenge of improving the generalizability and
factuality of LLMs in natural language understanding and question answering
remains under-explored. While previous in-context learning research has focused
on enhancing models to adhere to users' specific instructions and quality
expectations, and to avoid undesired outputs, little to no work has explored
the use of task-Specific fine-tuned Language Models (SLMs) to improve LLMs'
in-context learning during the inference stage. Our primary contribution is a
simple yet effective framework that enhances the reliability of LLMs as it:
1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from
discriminative models, and 3) minimizes hallucinations in generative tasks.
Using our proposed plug-in method, enhanced versions of Llama 2 and ChatGPT
surpass their original versions in terms of generalizability and factuality.
We offer a comprehensive suite of resources, including 16 curated datasets,
prompts, model checkpoints, and LLM outputs across 9 distinct tasks. The code
and data are released at:
https://github.com/YangLinyi/Supervised-Knowledge-Makes-Large-Language-Models-Better-In-context-Learners.
Our empirical analysis sheds light on the advantages of incorporating
discriminative models into LLMs and highlights the potential of our methodology
in fostering more reliable LLMs. |
---|---|
DOI: | 10.48550/arxiv.2312.15918 |
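The abstract describes a plug-in method in which a task-specific fine-tuned discriminative model supplies supervised knowledge to the LLM at inference time. The sketch below is one minimal way such a plug-in could be wired up, assuming the supervised knowledge is simply the SLM's predicted label and confidence inserted into the prompt; the classifier, prompt wording, and function name are illustrative placeholders, not the paper's released prompts or checkpoints.

```python
# Illustrative sketch only: the model, prompt wording, and function name
# are assumptions, not the paper's released artifacts.
from transformers import pipeline

# A task-specific fine-tuned discriminative model (the "SLM"); a public
# sentiment classifier is used here as a stand-in.
slm = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def build_plugin_prompt(text: str) -> str:
    """Attach the SLM's predicted label and confidence to an LLM prompt."""
    pred = slm(text)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.998}
    return (
        "You are given a review and the prediction of a fine-tuned "
        "classifier. Use it as a reference, but answer independently.\n"
        f"Review: {text}\n"
        f"Classifier prediction: {pred['label']} "
        f"(confidence {pred['score']:.2f})\n"
        "Question: Is the sentiment of the review positive or negative?"
    )

if __name__ == "__main__":
    prompt = build_plugin_prompt("The plot was thin, but the acting was superb.")
    print(prompt)  # this augmented prompt would then be sent to Llama 2 or ChatGPT
```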