Inference Performance Optimization for Large Language Models on CPUs

Large language models (LLMs) have shown exceptional performance and vast potential across diverse tasks. However, deploying LLMs with high performance in low-resource environments has garnered significant attention in the industry. When GPU hardware resources are limited, we can explore alternative options on CPUs. To mitigate the financial burden and alleviate constraints imposed by hardware resources, optimizing inference performance is necessary. In this paper, we introduce an easily deployable inference performance optimization solution aimed at accelerating LLMs on CPUs. In this solution, we implement an effective way to reduce the KV cache size while ensuring precision. We propose a distributed inference optimization approach and implement it based on the oneAPI Collective Communications Library. Furthermore, we propose optimization approaches for LLMs on CPUs, and conduct tailored optimizations for the most commonly used models. The code is open-sourced at https://github.com/intel/xFasterTransformer.
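The abstract says the KV cache size is reduced while ensuring precision, but the record does not describe the mechanism. As a rough illustration of the general idea only, the sketch below applies per-token int8 quantization to cached key/value tensors and dequantizes them before use; the function names and the quantization scheme are assumptions made for illustration, not the actual xFasterTransformer implementation.

# Illustrative sketch: per-token int8 quantization of KV-cache entries.
# This is an assumed simplification, not the scheme used in xFasterTransformer.
import numpy as np

def quantize_kv(x: np.ndarray):
    """Quantize a [tokens, head_dim] float32 tensor to int8 plus per-token scales."""
    scale = np.abs(x).max(axis=-1, keepdims=True) / 127.0 + 1e-8   # per-token scale factor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)    # ~4x smaller than fp32
    return q, scale.astype(np.float32)

def dequantize_kv(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an approximate float32 tensor before the attention matmul."""
    return q.astype(np.float32) * scale

# Example: cache one layer's keys for 16 generated tokens with head_dim 128.
keys = np.random.randn(16, 128).astype(np.float32)
q_keys, scales = quantize_kv(keys)
restored = dequantize_kv(q_keys, scales)
print("max abs reconstruction error:", np.abs(keys - restored).max())

Storing int8 values plus one float32 scale per token cuts the cache footprint to roughly a quarter of an fp32 cache while keeping the reconstruction error small, which is the kind of trade-off the abstract alludes to.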

Bibliographic Details
Main Authors: He, Pujiang; Zhou, Shan; Huang, Wenhuan; Li, Changqing; Wang, Duyi; Guo, Bin; Meng, Chen; Gui, Sheng; Yu, Weifei; Xie, Yi
Format: Article
Language: English
Published: 2024-07-09
Subjects: Computer Science - Artificial Intelligence
DOI: 10.48550/arxiv.2407.07304
Source: arXiv.org
Rights: http://creativecommons.org/licenses/by/4.0
Online Access: https://arxiv.org/abs/2407.07304