Optimization of Armv9 architecture general large language model inference performance based on Llama.cpp
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | This article optimizes the inference performance of the Qwen-1.8B model by
performing Int8 quantization, vectorizing some operators in llama.cpp, and
modifying the compilation script to improve the compiler optimization level. On
the Yitian 710 experimental platform, the prefill performance is increased by
1.6 times, the decoding performance is increased by 24 times, the memory usage
is reduced to 1/5 of the original, and the accuracy loss is almost negligible. |
DOI: | 10.48550/arxiv.2406.10816 |