Learning Accurate Integer Transformer Machine-Translation Models

We describe a method for training accurate Transformer machine-translation models to run inference using 8-bit integer (INT8) hardware matrix multipliers, as opposed to the more costly single-precision floating-point (FP32) hardware. Unlike previous work, which converted only 85 Transformer matrix multiplications to INT8, leaving 48 out of 133 of them in FP32 because of unacceptable accuracy loss, we convert them all to INT8 without compromising accuracy. Tested on the newstest2014 English-to-German translation task, our INT8 Transformer Base and Transformer Big models yield BLEU scores that are 99.3–100% relative to those of the corresponding FP32 models. Our approach converts all matrix-multiplication tensors from an existing FP32 model into INT8 tensors by automatically making range-precision trade-offs during training. To demonstrate the robustness of this approach, we also include results from INT6 Transformer models.
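
The abstract's core idea, replacing FP32 matrix multiplications with INT8 ones under a range-precision trade-off, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration rather than the paper's algorithm: it uses per-tensor symmetric quantization with a clipping range taken directly from the data, whereas the paper learns such trade-offs during training, and the function names are hypothetical.

```python
# A minimal, hypothetical sketch (not the paper's training procedure): symmetric
# per-tensor INT8 quantization of an FP32 matrix multiplication. The clipping
# range `max_abs` embodies the range-precision trade-off; here it is simply taken
# from the data, whereas the paper learns such trade-offs during training.
import numpy as np

def quantize_int8(x, max_abs):
    """Map FP32 values in [-max_abs, max_abs] to INT8 codes; return codes and scale."""
    scale = max(float(max_abs), 1e-8) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a, b):
    """Quantize both operands, multiply in integer arithmetic, then rescale to FP32."""
    qa, sa = quantize_int8(a, np.abs(a).max())
    qb, sb = quantize_int8(b, np.abs(b).max())
    acc = qa.astype(np.int32) @ qb.astype(np.int32)  # INT8 inputs, INT32 accumulation
    return acc.astype(np.float32) * (sa * sb)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.standard_normal((4, 8)).astype(np.float32)
    b = rng.standard_normal((8, 3)).astype(np.float32)
    print(np.max(np.abs(a @ b - int8_matmul(a, b))))  # small quantization error
```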

Bibliographic Details
Published in: SN Computer Science, 2021-07, Vol. 2 (4), p. 291, Article 291
Author: Wu, Ephrem
Format: Article
Language: English
Online Access: Full text
DOI: 10.1007/s42979-021-00688-4
ISSN: 2662-995X
EISSN: 2661-8907
Publisher: Springer Singapore

Subjects:
Accuracy
Computer Imaging
Computer Science
Computer Systems Organization and Communication Networks
Data Structures and Information Theory
Floating point arithmetic
Hardware
Information Systems and Communication Service
Integers
Machine translation
Mathematical analysis
Numbers
Original Research
Pattern Recognition and Graphics
Software Engineering/Programming and Operating Systems
Tensors
Training
Transformers
Vision