The Deep Learning Compiler: A Comprehensive Survey


Bibliographic Details

Published in: IEEE Transactions on Parallel and Distributed Systems, 2021-03, Vol. 32 (3), p. 708-727
Main authors: Li, Mingzhen; Liu, Yi; Liu, Xiaoyan; Sun, Qingxiao; You, Xin; Yang, Hailong; Luan, Zhongzhi; Gan, Lin; Yang, Guangwen; Qian, Depei
Format: Article
Language: English
Abstract: The difficulty of deploying various deep learning (DL) models on diverse DL hardware has boosted the research and development of DL compilers in the community. Several DL compilers have been proposed by both industry and academia, such as TensorFlow XLA and TVM. The DL compilers take the DL models described in different DL frameworks as input and generate optimized code for diverse DL hardware as output. However, no existing survey has comprehensively analyzed the unique design architecture of DL compilers. In this article, we perform a comprehensive survey of existing DL compilers by dissecting the commonly adopted design in detail, with emphasis on the DL-oriented multi-level IRs and the frontend/backend optimizations. We present a detailed analysis of the design of multi-level IRs and illustrate the commonly adopted optimization techniques. Finally, several insights are highlighted as potential research directions for DL compilers. This is the first survey article focusing on the design architecture of DL compilers, and we hope it can pave the road for future research on DL compilers.
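The design the abstract describes, a model lowered through multi-level IRs, with frontend (graph-level) optimization followed by lowering toward backend code generation, can be sketched with a toy, hypothetical IR. None of the names below belong to any real DL compiler's API; this is only an illustration of the pipeline shape.

```python
# Toy sketch of the multi-level-IR pipeline the survey describes: a
# frontend (graph-level) pass folds constants in a high-level expression
# tree, then a lowering step emits a linear, backend-level instruction
# list. All names are illustrative, not any real DL compiler's API.

# High-level IR: nested tuples, e.g. ("add", ("mul", "x", 2.0), ("mul", 3.0, 4.0))

def fold_constants(expr):
    """Frontend optimization: evaluate subtrees whose operands are all constants."""
    if not isinstance(expr, tuple):
        return expr  # a variable name or a constant leaf
    op, lhs, rhs = expr[0], fold_constants(expr[1]), fold_constants(expr[2])
    if isinstance(lhs, float) and isinstance(rhs, float):
        return lhs + rhs if op == "add" else lhs * rhs
    return (op, lhs, rhs)

def lower(expr, code=None):
    """Lower the tree to a low-level IR: (dest, op, a, b) three-address code."""
    if code is None:
        code = []
    if not isinstance(expr, tuple):
        return expr, code
    a, code = lower(expr[1], code)
    b, code = lower(expr[2], code)
    dest = f"t{len(code)}"
    code.append((dest, expr[0], a, b))
    return dest, code

expr = ("add", ("mul", "x", 2.0), ("mul", 3.0, 4.0))
opt = fold_constants(expr)   # the ("mul", 3.0, 4.0) subtree folds to 12.0
result, instrs = lower(opt)
print(opt)     # ("add", ("mul", "x", 2.0), 12.0)
print(instrs)  # [("t0", "mul", "x", 2.0), ("t1", "add", "t0", 12.0)]
```

In a real DL compiler the high-level IR is a computation graph imported from a framework, the frontend passes include operator fusion and layout transformation as well as constant folding, and the low-level IR is specialized per hardware target; the two-level structure above is the common shape the survey dissects.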
DOI: 10.1109/TPDS.2020.3030548
Publisher: IEEE, New York
CODEN: ITDSEO
ISSN: 1045-9219
EISSN: 1558-2183
Source: IEEE Electronic Library (IEL)
Subjects:
compiler
Compilers
Computational modeling
Computer architecture
Deep learning
Design analysis
Hardware
Integrated circuit modeling
intermediate representation
Libraries
Neural networks
Optimization
Optimization techniques
R&D
Research & development