Inference Performance Optimization for Large Language Models on CPUs
Format: Article
Language: English
Online access: Order full text
Abstract: Large language models (LLMs) have shown exceptional performance and
vast potential across diverse tasks. However, deploying LLMs with high
performance in low-resource environments has garnered significant attention
in the industry. When GPU hardware resources are limited, we can explore
alternative options on CPUs. To mitigate the financial burden and alleviate
the constraints imposed by hardware resources, optimizing inference
performance is necessary. In this paper, we introduce an easily deployable
inference performance optimization solution aimed at accelerating LLMs on
CPUs. In this solution, we implement an effective way to reduce the KV cache
size while ensuring precision. We propose a distributed inference optimization
approach and implement it based on the oneAPI Collective Communications
Library (oneCCL). Furthermore, we propose optimization approaches for LLMs on
CPUs and conduct tailored optimizations for the most commonly used models. The
code is open-sourced at https://github.com/intel/xFasterTransformer.
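The abstract does not spell out how the KV cache is shrunk; one common
technique consistent with "reducing size while ensuring precision" is to store
cached keys and values in int8 with one dequantization scale per cached
vector, so each element costs one byte instead of two (fp16) or four (fp32).
The sketch below is an illustrative assumption, not the paper's actual
implementation; the struct and function names are hypothetical.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical illustration: one attention head's K (or V) vector for one
// token, quantized to int8 with a single float scale so that x ~= q * scale.
struct QuantizedVec {
    std::vector<int8_t> q;  // quantized values, one byte per element
    float scale;            // dequantization scale
};

QuantizedVec quantize_int8(const float* x, size_t dim) {
    // Symmetric quantization: scale chosen so the largest magnitude maps
    // to +/-127, which bounds the per-element error by scale / 2.
    float amax = 0.f;
    for (size_t i = 0; i < dim; ++i) amax = std::max(amax, std::fabs(x[i]));
    QuantizedVec out;
    out.scale = amax > 0.f ? amax / 127.f : 1.f;
    out.q.resize(dim);
    for (size_t i = 0; i < dim; ++i)
        out.q[i] = static_cast<int8_t>(std::lround(x[i] / out.scale));
    return out;
}

void dequantize_int8(const QuantizedVec& v, float* x) {
    for (size_t i = 0; i < v.q.size(); ++i) x[i] = v.q[i] * v.scale;
}
```

Because the scale is computed per (layer, head, token) vector rather than per
tensor, outlier activations in one head do not degrade the precision of the
others, which is why this style of KV cache quantization tends to preserve
accuracy.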
DOI: 10.48550/arxiv.2407.07304
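For the distributed inference path, the abstract states only that the
implementation builds on oneCCL; the paper's real code lives in the linked
repository. As background, the collective a tensor-parallel LLM typically
issues is an allreduce that sums each rank's partial layer output. The sketch
below follows oneCCL's documented host-side bootstrap pattern (MPI is used
here only to exchange the key-value-store address); the buffer contents and
the hidden size are hypothetical.

```cpp
// Build against oneCCL and MPI, e.g.:
//   mpicxx allreduce_demo.cpp -lccl -o allreduce_demo
//   mpirun -n 2 ./allreduce_demo
#include <mpi.h>
#include <iostream>
#include <vector>
#include "oneapi/ccl.hpp"

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int size = 0, rank = 0;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    ccl::init();

    // Bootstrap: rank 0 creates the key-value store and broadcasts its
    // address; the other ranks attach to it.
    ccl::shared_ptr_class<ccl::kvs> kvs;
    ccl::kvs::address_type main_addr;
    if (rank == 0) {
        kvs = ccl::create_main_kvs();
        main_addr = kvs->get_address();
        MPI_Bcast(main_addr.data(), main_addr.size(), MPI_BYTE, 0, MPI_COMM_WORLD);
    } else {
        MPI_Bcast(main_addr.data(), main_addr.size(), MPI_BYTE, 0, MPI_COMM_WORLD);
        kvs = ccl::create_kvs(main_addr);
    }
    auto comm = ccl::create_communicator(size, rank, kvs);

    // In tensor parallelism each rank holds a partial layer output;
    // summing the partials with allreduce yields the full activation.
    const size_t count = 4096;  // hypothetical hidden size
    std::vector<float> partial(count, static_cast<float>(rank));
    std::vector<float> full(count);
    ccl::allreduce(partial.data(), full.data(), count,
                   ccl::reduction::sum, comm).wait();

    if (rank == 0)
        std::cout << "full[0] = " << full[0] << std::endl;

    MPI_Finalize();
    return 0;
}
```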