Minor SFT loss for LLM fine-tune to increase performance and reduce model deviation
Format: Article
Language: English
Abstract: Instruct LLM provides a paradigm used in large-scale language models to align LLMs with human preferences. The paradigm comprises supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). It is also used in downstream scenarios to adapt LLMs to specific corpora and applications. Compared to SFT, much more effort has gone into RLHF, with several algorithms proposed, such as PPO, DPO, IPO, KTO, and MinorDPO. Meanwhile, most efforts for SFT focus on how to collect, filter, and mix high-quality data. In this article, with insight from DPO and MinorDPO, we propose a training metric for SFT that measures the discrepancy between the optimized model and the original model, and a loss function, MinorSFT, that increases training effectiveness and reduces the discrepancy between the optimized LLM and the original LLM.
DOI: 10.48550/arxiv.2408.10642
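
The abstract describes a discrepancy metric between the optimized and original model and a discrepancy-reducing SFT loss, but gives no formulas. The sketch below is only an illustration of that general idea, not the paper's actual MinorSFT loss: it measures the per-example gap between the policy's and a frozen reference model's sequence log-probabilities, and down-weights the standard SFT cross-entropy on examples where the fine-tuned model has already drifted far from the original. All names and the sigmoid weighting (beta) are assumptions made for this illustration.

```python
# Illustrative sketch only; the actual MinorSFT formulation is defined in the
# paper (arXiv:2408.10642), not in this abstract.
import torch
import torch.nn.functional as F


def sequence_log_prob(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Sum of per-token log-probabilities of `labels` under `logits`.

    logits: (batch, seq_len, vocab); labels: (batch, seq_len), with -100
    marking positions to ignore (the common Hugging Face convention).
    """
    log_probs = F.log_softmax(logits, dim=-1)
    mask = labels.ne(-100)
    safe_labels = labels.clamp(min=0)  # avoid gathering with -100
    token_log_probs = log_probs.gather(-1, safe_labels.unsqueeze(-1)).squeeze(-1)
    return (token_log_probs * mask).sum(dim=-1)


def discrepancy(policy_logits, ref_logits, labels):
    """Per-example gap between policy and reference sequence log-probs.

    Positive values mean the fine-tuned (policy) model assigns the target
    sequence a higher log-probability than the original (reference) model.
    """
    return sequence_log_prob(policy_logits, labels) - sequence_log_prob(ref_logits, labels)


def discrepancy_weighted_sft_loss(policy_logits, ref_logits, labels, beta: float = 0.1):
    """Hypothetical discrepancy-aware SFT loss (illustrative, not MinorSFT).

    Standard token-level cross-entropy, re-weighted per example so that
    examples on which the policy has already moved far beyond the reference
    contribute less, keeping the optimized model closer to the original.
    """
    ce = F.cross_entropy(
        policy_logits.transpose(1, 2),  # (batch, vocab, seq_len) for cross_entropy
        labels,
        ignore_index=-100,
        reduction="none",
    ).sum(dim=-1)
    gap = discrepancy(policy_logits, ref_logits, labels)
    weight = torch.sigmoid(-beta * gap).detach()  # smaller weight for larger deviation
    return (weight * ce).mean()
```

In a training loop, `policy_logits` would come from the model being fine-tuned and `ref_logits` from a frozen copy of the original model evaluated under `torch.no_grad()` on the same batch; the mean of `discrepancy(...)` over a held-out set can also serve as a simple deviation metric during training.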