Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: The rapid advancement of large language models (LLMs) has facilitated their transformation into conversational chatbots that can grasp contextual nuances and generate pertinent sentences, closely mirroring human values through advanced techniques such as instruction tuning and reinforcement learning from human feedback (RLHF). However, the computational efficiency required for LLMs, achieved through techniques like post-training quantization (PTQ), presents challenges such as token-flipping that can impair chatbot performance. In response, we propose a novel preference alignment approach, quantization-aware direct preference optimization (QDPO), that aligns quantized LLMs with their full-precision counterparts, improving conversational abilities. Evaluated on two instruction-tuned LLMs in various languages, QDPO demonstrated superior performance in improving conversational abilities compared to established PTQ and knowledge-distillation fine-tuning techniques, marking a significant step forward in the development of efficient and effective conversational LLMs.
DOI: 10.48550/arxiv.2407.03051
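The summary describes QDPO as aligning a quantized LLM with its full-precision counterpart through direct preference optimization. The sketch below shows a standard DPO objective arranged in that spirit, under the assumption (not taken from the paper) that the quantized model acts as the policy, its frozen full-precision counterpart as the reference, and that preferred and dispreferred completions come from the full-precision and quantized models respectively; all names and the data layout are illustrative.

```python
import torch
import torch.nn.functional as F

def qdpo_style_dpo_loss(policy_chosen_logps: torch.Tensor,
                        policy_rejected_logps: torch.Tensor,
                        ref_chosen_logps: torch.Tensor,
                        ref_rejected_logps: torch.Tensor,
                        beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective over summed per-sequence log-probabilities.

    Hypothetical QDPO-style arrangement: the policy is the quantized model
    being fine-tuned, the reference is its frozen full-precision counterpart,
    "chosen" completions come from the full-precision model, and "rejected"
    completions are the quantized model's degraded (e.g. token-flipped)
    generations.
    """
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # -log sigmoid(beta * (policy log-ratio - reference log-ratio)),
    # averaged over the batch of preference pairs.
    return -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()

# Toy usage with random per-sequence log-probabilities for 4 preference pairs.
logps = [torch.randn(4) for _ in range(4)]
print(qdpo_style_dpo_loss(*logps).item())
```

This is the ordinary DPO loss; what would make it "quantization-aware" in the sense of the abstract is only how the preference pairs are constructed and which models play the policy and reference roles, not a change to the loss formula itself.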