FlashDecoding++: Faster Large Language Model Inference on GPUs

Large Language Models (LLMs) are becoming increasingly important in various domains, yet the following challenges remain unsolved in accelerating LLM inference: (1) Synchronized partial softmax update. The softmax operation requires a synchronized update among the partial softmax results, adding roughly 20% overhead to attention computation in LLMs. (2) Under-utilized computation of flat GEMM. The matrices in LLM-inference GEMMs are flat, leading to under-utilized computation and more than 50% performance loss after zero padding in previous designs. (3) Performance loss due to static dataflow. Kernel performance in LLM inference depends on input data features, hardware configurations, and more; a single, static dataflow can cause a 50.25% performance loss for GEMMs of different shapes. We present FlashDecoding++, a fast LLM inference engine supporting mainstream LLMs and hardware back-ends. To tackle the above challenges, FlashDecoding++ proposes: (1) Asynchronized softmax with unified max value. FlashDecoding++ introduces a unified max value for the different partial softmax computations so that they need no synchronization. (2) Flat GEMM optimization with double buffering. FlashDecoding++ shows that flat GEMMs of different shapes face different bottlenecks and introduces techniques such as double buffering accordingly. (3) Heuristic dataflow with hardware resource adaptation. FlashDecoding++ heuristically optimizes the dataflow across different hardware resources, taking input dynamics into account. Owing to the versatility of these optimizations, FlashDecoding++ achieves up to 4.86x and 2.18x speedups on NVIDIA and AMD GPUs, respectively, compared to Hugging Face implementations, and an average 1.37x speedup over state-of-the-art LLM inference engines on mainstream LLMs.
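Of the three techniques, the unified-max softmax is the easiest to illustrate in isolation: if every chunk of a row subtracts the same fixed value before exponentiation, the partial exponentials can be combined with one final reduction instead of a synchronized per-chunk rescaling. The NumPy sketch below shows only this arithmetic identity; the constant PHI, the function name, and the chunking are illustrative assumptions, not the paper's actual kernel, which also has to handle values outside the chosen range.

```python
import numpy as np

# Illustrative constant: an assumed upper bound on the logits. A real
# implementation would derive the unified value from the statistics of
# actual attention logits; 8.0 is just a placeholder for this sketch.
PHI = 8.0

def softmax_unified_max(chunks):
    """Softmax over concatenated chunks using one shared max value.

    Each chunk computes exp(x - PHI) independently, so no cross-chunk
    synchronization is needed until the single final normalization.
    """
    exps = [np.exp(c - PHI) for c in chunks]
    total = sum(e.sum() for e in exps)  # one reduction at the end
    return np.concatenate([e / total for e in exps])

# The result matches the standard safe softmax, because the constant
# factor exp(max(x) - PHI) cancels in the normalization.
x = np.random.randn(1024)
reference = np.exp(x - x.max())
reference /= reference.sum()
assert np.allclose(softmax_unified_max(np.split(x, 4)), reference)
```

The identity holds for any fixed value in exact arithmetic; the engineering question is choosing it so that exp(x - PHI) neither overflows nor underflows for the logits seen in practice.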

Bibliographic details
Published in: arXiv.org, 2023-11
Main authors: Hong, Ke; Dai, Guohao; Xu, Jiaming; Mao, Qiuli; Li, Xiuhong; Liu, Jun; Chen, Kangdi; Dong, Yuhan; Wang, Yu
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Buffers; Computation; Dataflow kernels; Hardware; Inference; Large language models; Optimization; Synchronism
Online access: Full text