TAGCOS: Task-agnostic Gradient Clustered Coreset Selection for Instruction Tuning Data
Saved in:
Main authors: | , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Instruction tuning has achieved unprecedented success in NLP, turning large
language models into versatile chatbots. However, the increasing variety and
volume of instruction datasets demand significant computational resources. To
address this, it is essential to extract a small and highly informative subset
(i.e., Coreset) that achieves comparable performance to the full dataset.
Achieving this goal poses non-trivial challenges: 1) obtaining accurate data
representations that reflect the quality of each training sample, 2) accounting
for the diverse nature of instruction datasets, and 3) keeping the coreset
selection algorithm efficient for large models. To address
these challenges, we propose Task-Agnostic Gradient Clustered COreset Selection
(TAGCOS). Specifically, we leverage sample gradients as the data
representations, perform clustering to group similar data, and apply an
efficient greedy algorithm for coreset selection. Experimental results show
that our algorithm, selecting only 5% of the data, surpasses other unsupervised
methods and achieves performance close to that of the full dataset. |
DOI: | 10.48550/arxiv.2407.15235 |
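The abstract names three steps: use per-sample gradients as data representations, cluster them to group similar data, and run an efficient greedy selection inside the clusters. A minimal sketch of that pipeline is below, with strong assumptions: synthetic feature vectors stand in for real per-sample gradients, a plain Lloyd's k-means replaces the paper's clustering, and a facility-location-style greedy picker replaces the paper's selection algorithm. All function names are hypothetical, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=25):
    """Plain Lloyd's k-means; returns a cluster label per sample."""
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels

def greedy_facility_location(feats, m):
    """Greedily pick m rows maximizing a facility-location coverage
    objective, where similarity = negative Euclidean distance."""
    sim = -np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    covered = np.full(len(feats), sim.min())  # best similarity seen so far
    selected = []
    for _ in range(min(m, len(feats))):
        # Marginal coverage gain of adding each candidate column
        gains = np.maximum(covered[:, None], sim).sum(axis=0) - covered.sum()
        gains[selected] = -np.inf  # never re-pick a selected sample
        c = int(gains.argmax())
        selected.append(c)
        covered = np.maximum(covered, sim[:, c])
    return selected

# Stand-in "per-sample gradient" features: 60 samples around 3 modes.
grads = np.concatenate([rng.normal(loc=mu, scale=0.3, size=(20, 8))
                        for mu in (-2.0, 0.0, 2.0)])

k, frac = 3, 0.05                         # 5% budget, as in the abstract
labels = kmeans(grads, k)
budget = max(k, int(frac * len(grads)))   # total coreset size
coreset = []
for j in range(k):
    idx = np.where(labels == j)[0]
    if not len(idx):
        continue
    # Per-cluster budget proportional to cluster size, at least 1
    m_j = max(1, round(budget * len(idx) / len(grads)))
    coreset.extend(int(idx[i])
                   for i in greedy_facility_location(grads[idx], m_j))

print(sorted(coreset))
```

Selecting within each cluster rather than globally is what keeps the greedy step cheap: its cost is quadratic in cluster size, not in the full dataset size, which matters for the efficiency requirement the abstract raises.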