Mixed Distillation Helps Smaller Language Model Better Reasoning
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: While large language models (LLMs) have demonstrated exceptional performance on recent natural language processing (NLP) tasks, their deployment poses substantial challenges due to high computational and memory demands in real-world applications. Recent studies have focused on enhancing smaller models through knowledge distillation from LLMs, yielding promising results. However, these models often struggle to match the performance of LLMs, especially in tasks that require reasoning. In this work, we introduce the Mixed Distillation (MD) framework, which capitalizes on the strengths of both Program of Thought (PoT) and Chain of Thought (CoT) capabilities within LLMs, combining multiple prompting techniques and distilling them into smaller models. Our experimental results show that MD significantly enhances the single-path and multi-path reasoning ability of smaller models across various tasks. In terms of accuracy and generality on reasoning tasks, the model it produces exceeds the combined performance of the two individually distilled models. Notably, LLaMA2-7B and CodeLlama-7B trained with MD achieved 84.5% and 85.5% on the SVAMP benchmark, respectively, outperforming GPT-3.5-Turbo by 2.5% and 3.5%.
DOI: 10.48550/arxiv.2312.10730
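
To make the distillation recipe summarized in the abstract more concrete, the sketch below shows one plausible way to assemble mixed CoT/PoT training data for a single student model. It is a minimal illustration under stated assumptions, not the paper's actual pipeline: the `MixedExample` class, the `[CoT]`/`[PoT]` instruction prefixes, and the `build_training_records` helper are hypothetical names chosen for this sketch, and the teacher rationales and programs are assumed to have been collected beforehand from an LLM.

```python
# Hypothetical sketch of mixed CoT/PoT distillation data preparation.
# All names here are illustrative and do not come from the paper.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class MixedExample:
    question: str
    cot_rationale: str   # natural-language chain of thought from the teacher LLM
    pot_program: str     # executable "program of thought" from the teacher LLM
    answer: str          # final answer associated with the question


def build_training_records(examples: List[MixedExample]) -> List[Dict[str, str]]:
    """Turn each example into two student training records, one in CoT format
    and one in PoT format, distinguished by an instruction prefix so that a
    single student model learns both reasoning paths."""
    records: List[Dict[str, str]] = []
    for ex in examples:
        records.append({
            "input": f"[CoT] {ex.question}",
            "target": f"{ex.cot_rationale}\nAnswer: {ex.answer}",
        })
        records.append({
            "input": f"[PoT] {ex.question}",
            "target": ex.pot_program,
        })
    return records


if __name__ == "__main__":
    demo = MixedExample(
        question="Each of 8 boxes holds 12 pens. How many pens are there in total?",
        cot_rationale="There are 8 boxes and each holds 12 pens, so 8 * 12 = 96.",
        pot_program="boxes = 8\npens_per_box = 12\nprint(boxes * pens_per_box)",
        answer="96",
    )
    for record in build_training_records([demo]):
        print(record)
```

In this layout, one student model is fine-tuned on both record types, so at inference time it can be prompted down either the natural-language path or the program path, which is consistent with the single-path and multi-path reasoning the abstract describes.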