Intel nGraph: An Intermediate Representation, Compiler, and Executor for Deep Learning

The Deep Learning (DL) community sees many novel topologies published each year. Achieving high performance on each new topology remains challenging, as each requires some level of manual effort. This issue is compounded by the proliferation of frameworks and hardware platforms. The current approach...

Detailed Description

Bibliographic Details
Published in: arXiv.org, 2018-01
Main authors: Cyphers, Scott; Bansal, Arjun K; Bhiwandiwalla, Anahita; Bobba, Jayaram; Brookhart, Matthew; Chakraborty, Avijit; Constable, Will; Convey, Christian; Cook, Leona; Kanawi, Omar; Kimball, Robert; Knight, Jason; Korovaiko, Nikolay; Kumar, Varun; Lao, Yixing; Lishka, Christopher R; Menon, Jaikrishnan; Myers, Jennifer; Sandeep Aswath Narayana; Procter, Adam; Webb, Tristan J
Format: Article
Language: English
Subjects:
Online access: Full text
container_title arXiv.org
creator Cyphers, Scott
Bansal, Arjun K
Bhiwandiwalla, Anahita
Bobba, Jayaram
Brookhart, Matthew
Chakraborty, Avijit
Constable, Will
Convey, Christian
Cook, Leona
Kanawi, Omar
Kimball, Robert
Knight, Jason
Korovaiko, Nikolay
Kumar, Varun
Lao, Yixing
Lishka, Christopher R
Menon, Jaikrishnan
Myers, Jennifer
Sandeep Aswath Narayana
Procter, Adam
Webb, Tristan J
description The Deep Learning (DL) community sees many novel topologies published each year. Achieving high performance on each new topology remains challenging, as each requires some level of manual effort. This issue is compounded by the proliferation of frameworks and hardware platforms. The current approach, which we call "direct optimization", requires deep changes within each framework to improve the training performance for each hardware backend (CPUs, GPUs, FPGAs, ASICs) and requires \(\mathcal{O}(fp)\) effort, where \(f\) is the number of frameworks and \(p\) is the number of platforms. While optimized kernels for deep-learning primitives are provided via libraries like the Intel Math Kernel Library for Deep Neural Networks (MKL-DNN), there are several compiler-inspired ways in which performance can be further optimized. Building on our experience creating neon (a fast deep learning library on GPUs), we developed Intel nGraph, a soon-to-be-open-sourced C++ library to simplify the realization of optimized deep learning performance across frameworks and hardware platforms. Initially supported frameworks include TensorFlow, MXNet, and the Intel neon framework. Initial backends are Intel Architecture CPUs (CPU), the Intel(R) Nervana Neural Network Processor (NNP), and NVIDIA GPUs. Currently supported compiler optimizations include efficient memory management and data layout abstraction. In this paper, we describe our overall architecture and its core components. In the future, we envision extending nGraph API support to a wider range of frameworks, hardware (including FPGAs and ASICs), and compiler optimizations (training versus inference optimizations, multi-node and multi-device scaling via efficient sub-graph partitioning, and HW-specific compounding of operations).
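To make the scaling argument in the abstract concrete, stated with its own symbols: under direct optimization every (framework, backend) pair needs its own port, whereas a shared intermediate representation needs only one bridge per framework plus one transformer per backend. The \(\mathcal{O}(f + p)\) figure below is the standard implication of an IR-based design, inferred here rather than quoted from the abstract.

```latex
\underbrace{E_{\text{direct}} = \mathcal{O}(f \cdot p)}_{\text{one port per (framework, backend) pair}}
\qquad \text{vs.} \qquad
\underbrace{E_{\text{IR}} = \mathcal{O}(f + p)}_{\text{one bridge per framework, one transformer per backend}}
% Worked example with the counts listed in the abstract:
% f = 3 frameworks (TensorFlow, MXNet, neon), p = 3 backends (CPU, NNP, GPU)
% => 3 * 3 = 9 direct ports vs. 3 + 3 = 6 IR-side components.
```

And a minimal, self-contained C++ sketch of the layering the abstract describes: a framework-neutral dataflow graph plus a toy memory-planning pass. This is illustrative only; every name in it (Node, param, binop, schedule) is invented for the sketch and is not the nGraph API.

```cpp
// Sketch of an IR-style dataflow graph with a toy memory planner.
// Illustrative only -- none of these names are the actual nGraph API.
#include <algorithm>
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

struct Node;
using NodePtr = std::shared_ptr<Node>;

struct Node {
    std::string op;              // e.g. "Parameter", "Multiply", "Add"
    std::vector<NodePtr> inputs; // dataflow edges into this op
    std::vector<size_t> shape;   // static tensor shape (float32 assumed)
    size_t offset = 0;           // byte offset assigned by the planner

    size_t bytes() const {
        size_t n = sizeof(float);
        for (size_t d : shape) n *= d;
        return n;
    }
};

NodePtr param(std::vector<size_t> shape) {
    auto n = std::make_shared<Node>();
    n->op = "Parameter";
    n->shape = std::move(shape);
    return n;
}

NodePtr binop(const std::string& op, const NodePtr& a, const NodePtr& b) {
    auto n = std::make_shared<Node>();
    n->op = op;
    n->inputs = {a, b};
    n->shape = a->shape; // elementwise op: output shape matches input
    return n;
}

// Post-order DFS yields a valid topological execution order for the DAG.
void schedule(const NodePtr& n, std::vector<NodePtr>& order) {
    for (const auto& in : n->inputs) schedule(in, order);
    if (std::find(order.begin(), order.end(), n) == order.end())
        order.push_back(n);
}

int main() {
    // f(a, b, c) = (a * b) + c on 2x3 float tensors.
    auto a = param({2, 3}), b = param({2, 3}), c = param({2, 3});
    auto r = binop("Add", binop("Multiply", a, b), c);

    std::vector<NodePtr> order;
    schedule(r, order);

    // Toy "memory management": bump-allocate one slot per tensor. A real
    // planner would reuse buffers of dead intermediates via liveness analysis.
    size_t pool = 0;
    for (const auto& n : order) {
        n->offset = pool;
        pool += n->bytes();
        std::printf("%-9s -> offset %3zu (%zu bytes)\n",
                    n->op.c_str(), n->offset, n->bytes());
    }
    std::printf("total pool: %zu bytes\n", pool);
    return 0;
}
```

The bump allocator deliberately over-allocates; the point is only where a memory-management pass sits in such a pipeline, between framework-independent graph construction and backend code generation.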
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2018-01
issn 2331-8422
language eng
recordid cdi_proquest_journals_2071286873
source Free E-Journals
subjects Central processing units
Computer architecture
CPUs
Deep learning
Field programmable gate arrays
Graphics processing units
Hardware
Kernels
Libraries
Memory management
Microprocessors
Neon
Neural networks
Optimization
Platforms
Topology
Training
title Intel nGraph: An Intermediate Representation, Compiler, and Executor for Deep Learning