DPO Kernels: A Semantically-Aware, Kernel-Enhanced, and Divergence-Rich Paradigm for Direct Preference Optimization
Saved in:
Main Authors: |  |
---|---|
Format: | Article |
Language: | English |
Subjects: |  |
Online Access: | Order full text |
Summary: | The rapid rise of large language models (LLMs) has unlocked many applications
but also underscores the challenge of aligning them with diverse values and
preferences. Direct Preference Optimization (DPO) is central to alignment but
constrained by fixed divergences and limited feature transformations. We
propose DPO-Kernels, which integrates kernel methods to address these issues
through four key contributions: (i) Kernelized Representations with polynomial,
RBF, Mahalanobis, and spectral kernels for richer transformations, plus a
hybrid loss combining embedding-based and probability-based objectives; (ii)
Divergence Alternatives (Jensen-Shannon, Hellinger, Renyi, Bhattacharyya,
Wasserstein, and f-divergences) for greater stability; (iii) Data-Driven
Selection metrics that automatically choose the best kernel-divergence pair;
and (iv) a Hierarchical Mixture of Kernels for both local precision and global
modeling. Evaluations on 12 datasets demonstrate state-of-the-art performance
in factuality, safety, reasoning, and instruction following. Grounded in
Heavy-Tailed Self-Regularization, DPO-Kernels maintains robust generalization
for LLMs, offering a comprehensive resource for further alignment research. |
---|---|
DOI: | 10.48550/arxiv.2501.03271 |
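
The abstract's first contribution pairs the standard probability-based DPO objective with an embedding-based kernel term. The following is a minimal sketch of such a hybrid loss using an RBF kernel; the function names, the `alpha` weighting, and the use of prompt/response embeddings are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a kernel-enhanced hybrid DPO-style loss.
# Names (hybrid_dpo_loss, rbf_kernel, alpha, gamma) are illustrative and
# not taken from the paper; DPO-Kernels' exact formulation may differ.
import torch
import torch.nn.functional as F


def rbf_kernel(a: torch.Tensor, b: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    """RBF similarity k(a, b) = exp(-gamma * ||a - b||^2), computed row-wise."""
    return torch.exp(-gamma * (a - b).pow(2).sum(dim=-1))


def hybrid_dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_w | x), shape (B,)
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_l | x), shape (B,)
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_w | x), shape (B,)
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_l | x), shape (B,)
    prompt_emb: torch.Tensor,             # prompt embeddings, shape (B, D)
    chosen_emb: torch.Tensor,             # chosen-response embeddings, shape (B, D)
    rejected_emb: torch.Tensor,           # rejected-response embeddings, shape (B, D)
    beta: float = 0.1,
    alpha: float = 0.5,                   # weight of the embedding-based term (assumed)
    gamma: float = 1.0,
) -> torch.Tensor:
    # Probability-based term: standard DPO on implicit reward margins.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    prob_loss = -F.logsigmoid(chosen_rewards - rejected_rewards)

    # Embedding-based term: prefer the response whose embedding is more
    # kernel-similar to the prompt. The RBF kernel could be swapped for the
    # polynomial, Mahalanobis, or spectral kernels named in the abstract.
    sim_chosen = rbf_kernel(prompt_emb, chosen_emb, gamma)
    sim_rejected = rbf_kernel(prompt_emb, rejected_emb, gamma)
    emb_loss = -F.logsigmoid(sim_chosen - sim_rejected)

    return (prob_loss + alpha * emb_loss).mean()
```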
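
The second contribution lists alternative divergences (Jensen-Shannon, Hellinger, Rényi, Bhattacharyya, among others). The sketch below gives their standard definitions for discrete distributions; how DPO-Kernels weights or selects among them is not specified in this record.

```python
# Standard divergence definitions for two discrete distributions p and q.
# These are textbook formulas, not code from the paper.
import numpy as np


def jensen_shannon(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2.
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def hellinger(p: np.ndarray, q: np.ndarray) -> float:
    # H(p, q) = sqrt(0.5 * sum((sqrt(p) - sqrt(q))^2)).
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))


def renyi(p: np.ndarray, q: np.ndarray, alpha: float = 2.0, eps: float = 1e-12) -> float:
    # D_alpha(p || q) = (1 / (alpha - 1)) * log(sum(p^alpha * q^(1 - alpha))).
    return float(np.log(np.sum(p ** alpha * (q + eps) ** (1 - alpha)) + eps) / (alpha - 1))


def bhattacharyya(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    # D_B(p, q) = -log(sum(sqrt(p * q))).
    return float(-np.log(np.sum(np.sqrt(p * q)) + eps))


# Example usage on two toy distributions.
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(jensen_shannon(p, q), hellinger(p, q), renyi(p, q), bhattacharyya(p, q))
```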