L3Ms -- Lagrange Large Language Models
Saved in:
Main authors: , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Supervised fine-tuning (SFT) and alignment of large language models (LLMs) are key steps in providing a good user experience. However, the concept of an appropriate alignment is inherently application-dependent, and current methods often rely on heuristic choices to drive the optimization. In this work, we formulate SFT and alignment as a constrained optimization problem, where the LLM is trained on a task while being required to meet application-specific requirements, without resorting to heuristics. To solve this, we propose Lagrange Large Language Models (L3Ms), which employ logarithmic barriers to enforce the constraints. This approach allows for the customization of L3Ms across diverse applications while avoiding heuristic-driven processes. We demonstrate experimentally the versatility and efficacy of L3Ms in achieving tailored alignments for various applications.
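The log-barrier idea from the abstract can be illustrated on a toy constrained minimization: a task loss is minimized subject to a constraint g(x) <= c by adding the barrier term -(1/t) * log(c - g(x)) and increasing t. This is a minimal sketch of the general technique only; the functions f and g below are hypothetical stand-ins, not the paper's actual training objectives or constraints.

```python
import numpy as np

def f(x):
    # Surrogate "task loss" (stand-in, not the paper's objective).
    return np.sum((x - 2.0) ** 2)

def g(x):
    # Surrogate constraint value; we require g(x) <= c.
    return np.sum(x ** 2)

def barrier_loss(x, t, c):
    # Barrier-augmented objective: f(x) - (1/t) * log(c - g(x)).
    slack = c - g(x)
    if slack <= 0:
        return np.inf  # infeasible points get infinite loss
    return f(x) - (1.0 / t) * np.log(slack)

def grad(x, t, c):
    # Gradient of the barrier-augmented objective.
    slack = c - g(x)
    return 2.0 * (x - 2.0) + (1.0 / t) * (2.0 * x) / slack

x = np.zeros(2)               # start strictly inside the feasible set
c = 1.0                       # constraint level: ||x||^2 <= 1
for t in [1.0, 10.0, 100.0]:  # tighten the barrier over stages
    for _ in range(2000):
        x -= 1e-3 * grad(x, t, c)

print(g(x) <= c)  # the iterate stays feasible throughout
```

The unconstrained minimizer of f lies outside the feasible set, so the barrier keeps the iterate just inside the boundary; as t grows, the solution approaches the constrained optimum.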
DOI: 10.48550/arxiv.2410.21533