Take the essence and discard the dross: A Rethinking on Data Selection for Fine-Tuning Large Language Models
Format: Article
Language: English
Online access: Order full text
Summary: Data selection for fine-tuning Large Language Models (LLMs) aims to select a high-quality subset from a given candidate dataset and use it to train a Pending Fine-tune Model (PFM) into a Selective-Enhanced Model (SEM). This can improve model performance and accelerate the training process. Although a few surveys have investigated related work on data selection, a comprehensive comparison of existing methods is still lacking because they are evaluated under widely varying experimental settings. To address this issue, we first propose a three-stage scheme for data selection and comprehensively review existing works according to this scheme. We then design a unified comparison method with ratio-based efficiency indicators and ranking-based feasibility indicators to overcome the difficulty of comparing models evaluated under diverse experimental settings. An in-depth comparative analysis shows that more targeted methods with data-specific and model-specific quality labels achieve higher efficiency, but that the introduction of additional noise should be avoided when designing selection algorithms. Finally, we summarize the trends in data selection and highlight the short-term and long-term challenges to guide future research.
DOI: 10.48550/arxiv.2406.14115
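
The abstract does not spell out the three-stage scheme or the exact indicator definitions, but the general setup it describes (score candidate examples, keep a high-quality subset for fine-tuning the PFM into an SEM, then relate the resulting gain to the amount of data used) can be sketched roughly as follows. The `select_subset` and `ratio_based_efficiency` functions, the toy quality scores, and the benchmark numbers are illustrative assumptions, not the paper's actual method or indicators.

```python
# Minimal sketch of a generic data-selection pipeline for fine-tuning,
# under the assumptions stated above (not the paper's definitions).

from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass
class Example:
    text: str
    score: float = 0.0  # quality label assigned by some selection method


def select_subset(
    candidates: Sequence[Example],
    scorer: Callable[[Example], float],
    keep_ratio: float,
) -> List[Example]:
    """Keep the top `keep_ratio` fraction of candidates by quality score."""
    scored = sorted(candidates, key=scorer, reverse=True)
    k = max(1, int(len(scored) * keep_ratio))
    return scored[:k]


def ratio_based_efficiency(sem_metric: float, pfm_metric: float, keep_ratio: float) -> float:
    """One plausible reading of a ratio-based efficiency indicator:
    the SEM's gain over the PFM, normalised by the fraction of data used.
    Assumed formula for illustration only."""
    return (sem_metric - pfm_metric) / keep_ratio


if __name__ == "__main__":
    # Toy candidate pool with hypothetical pre-computed quality scores.
    pool = [Example(f"instruction {i}", score=(i % 7) / 7.0) for i in range(100)]
    subset = select_subset(pool, scorer=lambda ex: ex.score, keep_ratio=0.2)
    print(f"selected {len(subset)} of {len(pool)} examples")
    # Hypothetical benchmark scores for the PFM and the resulting SEM.
    print("efficiency:", ratio_based_efficiency(sem_metric=0.62, pfm_metric=0.55, keep_ratio=0.2))
```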