SAL-PIM: A Subarray-level Processing-in-Memory Architecture with LUT-based Linear Interpolation for Transformer-based Text Generation
Text generation is a compelling sub-field of natural language processing that aims to generate human-readable text from input words. In particular, decoder-only generative models such as the generative pre-trained transformer (GPT) are widely used for text generation, with two major computational stages: summarization and generation. Unlike the summarization stage, which can process the input tokens in parallel, the generation stage is difficult to accelerate because it produces output tokens sequentially, one iteration at a time. Moreover, each iteration requires reading the whole model with little opportunity for data reuse. The workload of transformer-based text generation is therefore severely memory-bound, making external memory bandwidth the system bottleneck. In this paper, we propose SAL-PIM, an HBM-based subarray-level processing-in-memory architecture for the end-to-end acceleration of transformer-based text generation. SAL-PIM has three architectural features. First, it exploits higher internal bandwidth by integrating multiple subarray-level arithmetic logic units with optimized data mapping schemes. Second, it adopts LUT-based linear interpolation to compute complex non-linear functions in PIM. Third, it accelerates end-to-end text-generation inference on PIM. To validate the architecture, we built a cycle-accurate simulator and implemented SAL-PIM's logic units in 28-nm CMOS technology. As a result, for input sizes from 32 to 128 and output sizes from 1 to 256, SAL-PIM achieves a maximum speedup of 4.72x and an average speedup of 1.83x for text generation with the GPT-2 medium model, compared to a server-level GPU.
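For intuition about why the generation stage is bandwidth-bound, here is a back-of-the-envelope sketch. It is not from the paper: the parameter count is the commonly cited figure for GPT-2 medium (~355M), and the bandwidth value is an illustrative assumption for an HBM-class external interface.

```python
# Back-of-the-envelope: why autoregressive generation is memory-bound.
# All numbers below are illustrative assumptions, not figures from the paper.

PARAMS = 355e6          # GPT-2 medium parameter count (approx.)
BYTES_PER_PARAM = 2     # assuming FP16 weights
EXT_BW = 900e9          # assumed external (off-chip) HBM bandwidth, bytes/s

# With little data reuse, every generated token reads the whole model once.
bytes_per_token = PARAMS * BYTES_PER_PARAM

# Token rate is then capped by memory bandwidth, not by compute throughput:
max_tokens_per_s = EXT_BW / bytes_per_token
print(f"bandwidth-bound ceiling: {max_tokens_per_s:.0f} tokens/s")
```

This is the ceiling a PIM design attacks: the aggregate subarray-level bandwidth inside the DRAM dies far exceeds the external interface, so moving compute into the subarrays raises the ceiling proportionally.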
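The paper's actual kernels are not reproduced here; the following is a minimal sketch of the general LUT-plus-linear-interpolation technique the abstract names, using GELU as the example non-linear function. The function choice, the 64-entry table, and the clamp range are assumptions for illustration; SAL-PIM's real table sizes and supported functions may differ.

```python
import math

def gelu(x: float) -> float:
    """Reference GELU (tanh approximation)."""
    return 0.5 * x * (1.0 + math.tanh(0.7978845608 * (x + 0.044715 * x**3)))

# Hypothetical LUT parameters: sample the function once, offline.
LO, HI, ENTRIES = -8.0, 8.0, 64
STEP = (HI - LO) / (ENTRIES - 1)
LUT = [gelu(LO + i * STEP) for i in range(ENTRIES)]

def gelu_lut(x: float) -> float:
    """Approximate GELU by linear interpolation between two LUT entries."""
    x = min(max(x, LO), HI)              # clamp into the sampled range
    pos = (x - LO) / STEP                # fractional index into the table
    i = min(int(pos), ENTRIES - 2)       # left neighbor (keep i+1 in range)
    frac = pos - i
    return LUT[i] + frac * (LUT[i + 1] - LUT[i])

if __name__ == "__main__":
    for x in (-2.0, -0.5, 0.0, 1.3, 3.7):
        print(f"x={x:5.2f}  exact={gelu(x):+.5f}  lut={gelu_lut(x):+.5f}")
```

The appeal for PIM hardware is that the per-element work reduces to one table lookup, one subtract, one multiply, and one add, which fits a small in-memory ALU far more readily than evaluating tanh or exp directly.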
Main authors: | Han, Wontak; Cho, Hyunjun; Kim, Donghyuk; Kim, Joo-Young |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Hardware Architecture |
Online access: | Order full text |
creator | Han, Wontak; Cho, Hyunjun; Kim, Donghyuk; Kim, Joo-Young |
---|---|
description | Text generation is a compelling sub-field of natural language processing
that aims to generate human-readable text from input words. In particular,
decoder-only generative models such as the generative pre-trained transformer
(GPT) are widely used for text generation, with two major computational
stages: summarization and generation. Unlike the summarization stage, which can
process the input tokens in parallel, the generation stage is difficult to
accelerate because it produces output tokens sequentially, one iteration at a
time. Moreover, each iteration requires reading the whole model with little
opportunity for data reuse. The workload of transformer-based text generation
is therefore severely memory-bound, making external memory bandwidth the system
bottleneck. In this paper, we propose SAL-PIM, an HBM-based subarray-level
processing-in-memory architecture for the end-to-end acceleration of
transformer-based text generation. SAL-PIM has three architectural features.
First, it exploits higher internal bandwidth by integrating multiple
subarray-level arithmetic logic units with optimized data mapping schemes.
Second, it adopts LUT-based linear interpolation to compute complex non-linear
functions in PIM. Third, it accelerates end-to-end text-generation inference on
PIM. To validate the architecture, we built a cycle-accurate simulator and
implemented SAL-PIM's logic units in 28-nm CMOS technology. As a result, for
input sizes from 32 to 128 and output sizes from 1 to 256, SAL-PIM achieves a
maximum speedup of 4.72x and an average speedup of 1.83x for text generation
with the GPT-2 medium model, compared to a server-level GPU. |
doi_str_mv | 10.48550/arxiv.2401.17005 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2401.17005 |
language | eng |
recordid | cdi_arxiv_primary_2401_17005 |
source | arXiv.org |
subjects | Computer Science - Hardware Architecture |
title | SAL-PIM: A Subarray-level Processing-in-Memory Architecture with LUT-based Linear Interpolation for Transformer-based Text Generation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-20T20%3A02%3A09IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=SAL-PIM:%20A%20Subarray-level%20Processing-in-Memory%20Architecture%20with%20LUT-based%20Linear%20Interpolation%20for%20Transformer-based%20Text%20Generation&rft.au=Han,%20Wontak&rft.date=2024-01-30&rft_id=info:doi/10.48550/arxiv.2401.17005&rft_dat=%3Carxiv_GOX%3E2401_17005%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |