Instruction Tuning with GPT-4
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Prior work has shown that finetuning large language models (LLMs) using
machine-generated instruction-following data enables such models to achieve
remarkable zero-shot capabilities on new tasks, and no human-written
instructions are needed. In this paper, we present the first attempt to use
GPT-4 to generate instruction-following data for LLM finetuning. Our early
experiments on instruction-tuned LLaMA models show that the 52K English and
Chinese instruction-following data generated by GPT-4 leads to superior
zero-shot performance on new tasks compared to the instruction-following data generated
by previous state-of-the-art models. We also collect feedback and comparison
data from GPT-4 to enable a comprehensive evaluation and reward model training.
We make our data generated using GPT-4 as well as our codebase publicly
available.
DOI: 10.48550/arxiv.2304.03277
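As a rough illustration of the data-generation step described in the summary, the sketch below asks GPT-4 to answer seed instructions and stores the resulting pairs in the Alpaca-style JSON format commonly used for LLaMA instruction tuning. This is a minimal sketch, not the paper's released pipeline: the use of the `openai` Python client, the model name, the prompt wording, the seed instructions, and the output file name are all illustrative assumptions.

```python
# Illustrative sketch of GPT-4-based instruction-data generation.
# Assumes the `openai` Python package (>=1.0) and an OPENAI_API_KEY in the environment.
# Prompt wording, seed instructions, and file names are hypothetical, not from the paper's codebase.
import json

from openai import OpenAI

client = OpenAI()

# Hypothetical seed instructions; the paper itself starts from a 52K instruction set.
seed_instructions = [
    "Give three tips for staying healthy.",
    "Translate the following sentence into Chinese: 'Knowledge is power.'",
]


def generate_example(instruction: str, model: str = "gpt-4") -> dict:
    """Ask the model to answer one instruction and return an (instruction, output) pair."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant that follows instructions precisely."},
            {"role": "user", "content": instruction},
        ],
        temperature=1.0,
    )
    return {
        "instruction": instruction,
        "input": "",
        "output": response.choices[0].message.content,
    }


if __name__ == "__main__":
    dataset = [generate_example(inst) for inst in seed_instructions]
    # Save in Alpaca-style JSON so the pairs can feed a standard LLaMA finetuning script.
    with open("gpt4_instruction_data.json", "w", encoding="utf-8") as f:
        json.dump(dataset, f, ensure_ascii=False, indent=2)
```

The same loop, pointed at a stronger and a weaker generator and followed by a GPT-4 preference query over the two outputs, would yield the kind of comparison data the summary mentions for reward model training; that step is omitted here for brevity.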