Optimization of Armv9 architecture general large language model inference performance based on Llama.cpp
This article optimizes the inference performance of the Qwen-1.8B model by performing Int8 quantization, vectorizing some operators in llama.cpp, and modifying the compilation script to improve the compiler optimization level. On the Yitian 710 experimental platform, the prefill performance is increased by 1.6 times, the decoding performance is increased by 24 times, the memory usage is reduced to 1/5 of the original, and the accuracy loss is almost negligible.
Saved in:
Main authors: | Chen, Longhao; Zhao, Yina; Xie, Qiangjun; Sheng, Qinghua |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Hardware Architecture; Computer Science - Performance; Computer Science - Programming Languages |
Online access: | Order full text |
creator | Chen, Longhao; Zhao, Yina; Xie, Qiangjun; Sheng, Qinghua |
description | This article optimizes the inference performance of the Qwen-1.8B model by
performing Int8 quantization, vectorizing some operators in llama.cpp, and
modifying the compilation script to improve the compiler optimization level. On
the Yitian 710 experimental platform, the prefill performance is increased by
1.6 times, the decoding performance is increased by 24 times, the memory usage
is reduced to 1/5 of the original, and the accuracy loss is almost negligible. |
doi_str_mv | 10.48550/arxiv.2406.10816 |
format | Article |
creationdate | 2024-06-16 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2406.10816 |
language | eng |
recordid | cdi_arxiv_primary_2406_10816 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Hardware Architecture; Computer Science - Performance; Computer Science - Programming Languages |
title | Optimization of Armv9 architecture general large language model inference performance based on Llama.cpp |
url | https://arxiv.org/abs/2406.10816 |