Rethinking the Instruction Quality: LIFT is What You Need
Saved in:
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Instruction tuning, a specialized technique to enhance large language model (LLM) performance via instruction datasets, relies heavily on the quality of the employed data. Existing quality improvement methods alter instruction data through dataset expansion or curation. However, the expansion method risks data redundancy, potentially compromising LLM performance, while the curation approach confines the LLM's potential to the original dataset. Our aim is to surpass the original data quality without encountering these shortcomings. To achieve this, we propose LIFT (LLM Instruction Fusion Transfer), a novel and versatile paradigm designed to elevate instruction quality to new heights. LIFT strategically broadens the data distribution to encompass more high-quality subspaces and eliminates redundancy, concentrating on high-quality segments across the overall data subspaces. Experimental results demonstrate that, even with a limited quantity of high-quality instruction data selected by our paradigm, LLMs not only consistently uphold robust performance across various tasks but also surpass some state-of-the-art results, highlighting the significant improvement in instruction quality achieved by our paradigm.
DOI: 10.48550/arxiv.2312.11508
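The abstract describes LIFT as combining two phases: expansion (broadening the data distribution toward more high-quality subspaces) and curation (removing redundancy while concentrating on high-quality segments). The paper's concrete algorithm is not given in this record, so the sketch below is only a hypothetical illustration of such an expansion-then-curation pipeline; the helper functions (embed, quality_score), the trivial paraphrasing stand-in, and all thresholds are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical expansion-then-curation sketch in the spirit of LIFT.
# All helpers and thresholds here are illustrative assumptions.
import numpy as np

def embed(texts):
    # Placeholder embedding; in practice a sentence encoder would be used.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 16))

def quality_score(instruction):
    # Placeholder quality heuristic; a real pipeline would likely use an
    # LLM-based or reward-model-based scorer.
    tokens = instruction.split()
    return len(set(tokens)) / max(len(tokens), 1)

def expand(dataset, k=2):
    # Phase 1: broaden the distribution, e.g. by rewriting each instruction
    # k times (shown here as a trivial stand-in for LLM paraphrasing).
    expanded = list(dataset)
    for inst in dataset:
        expanded.extend(f"{inst} (variant {i})" for i in range(k))
    return expanded

def curate(dataset, keep_ratio=0.3, sim_threshold=0.9):
    # Phase 2: keep only high-quality, non-redundant instructions.
    vecs = embed(dataset)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    order = sorted(range(len(dataset)), key=lambda i: -quality_score(dataset[i]))
    budget = int(len(dataset) * keep_ratio)
    kept, kept_vecs = [], []
    for i in order:
        if len(kept) >= budget:
            break
        # Skip near-duplicates of already-kept items (cosine similarity check).
        if kept_vecs and max(float(vecs[i] @ v) for v in kept_vecs) > sim_threshold:
            continue
        kept.append(dataset[i])
        kept_vecs.append(vecs[i])
    return kept

if __name__ == "__main__":
    seed = ["Summarize the following article.", "Translate this sentence to French."]
    print(curate(expand(seed)))
```

The design point the sketch tries to capture is that selection happens after expansion, so the curation step can pick from a broader candidate pool rather than being confined to the original dataset.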