Self-Evolution Fine-Tuning for Policy Optimization

| Main authors: | , , , , |
| --- | --- |
| Format: | Article |
| Language: | English |

Abstract: The alignment of large language models (LLMs) is crucial not only for unlocking their potential in specific tasks but also for ensuring that responses meet human expectations and adhere to safety and ethical principles. Current alignment methodologies face considerable challenges. For instance, supervised fine-tuning (SFT) requires extensive, high-quality annotated samples, while reinforcement learning from human feedback (RLHF) is complex and often unstable. In this paper, we introduce self-evolution fine-tuning (SEFT) for policy optimization, with the aim of eliminating the need for annotated samples while retaining the stability and efficiency of SFT. SEFT first trains an adaptive reviser to elevate low-quality responses while maintaining high-quality ones. The reviser then gradually guides the policy's optimization by fine-tuning it with enhanced responses. One of the prominent features of this method is its ability to leverage unlimited amounts of unannotated data for policy optimization through supervised fine-tuning. Our experiments on AlpacaEval 2.0 and MT-Bench demonstrate the effectiveness of SEFT. We also provide a comprehensive analysis of its advantages over existing alignment techniques.

DOI: 10.48550/arxiv.2406.10813
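
The abstract describes SEFT as a two-stage procedure: an adaptive reviser upgrades low-quality responses while preserving high-quality ones, and the policy is then supervised fine-tuned on the revised outputs, so unannotated prompts can drive the optimization. As a rough illustration only, the sketch below mocks that loop in Python; every name here (Example, revise_batch, seft_step, the toy reviser) is an assumption for exposition and does not reflect the paper's actual implementation.

```python
# Illustrative sketch only: a toy version of the two-stage SEFT loop outlined
# in the abstract. All names and the toy models are assumptions, not the
# paper's implementation.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    prompt: str
    response: str


def revise_batch(reviser: Callable[[str, str], str],
                 batch: List[Example]) -> List[Example]:
    """Stage 1: the adaptive reviser rewrites each response, ideally improving
    low-quality ones while leaving high-quality ones essentially unchanged."""
    return [Example(ex.prompt, reviser(ex.prompt, ex.response)) for ex in batch]


def seft_step(sft_update: Callable[[List[Example]], None],
              reviser: Callable[[str, str], str],
              unannotated_batch: List[Example]) -> None:
    """Stage 2: one supervised fine-tuning step for the policy on the revised
    responses; the reviser supplies the targets, so no human labels are needed."""
    sft_update(revise_batch(reviser, unannotated_batch))


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; in practice both the
    # reviser and the SFT update would involve LLMs.
    toy_reviser = lambda prompt, response: response.strip().capitalize() + "."
    toy_sft = lambda batch: print(f"SFT update on {len(batch)} revised examples")

    data = [Example("Explain SEFT briefly",
                    "it fine-tunes the policy on reviser-improved responses")]
    seft_step(toy_sft, toy_reviser, data)
```

Because the policy update in this scheme is ordinary supervised fine-tuning on reviser-enhanced targets, it keeps SFT's training stability while sidestepping the need for human-annotated responses, which is the trade-off the abstract emphasizes.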