Logic Shrinkage: Learned FPGA Netlist Sparsity for Efficient Neural Network Inference
FPGA-specific DNN architectures using the native LUTs as independently trainable inference operators have been shown to achieve favorable area-accuracy and energy-accuracy tradeoffs. The first work in this area, LUTNet, exhibited state-of-the-art performance for standard DNN benchmarks. In this paper, we propose the learned optimization of such LUT-based topologies, resulting in higher-efficiency designs than via the direct use of off-the-shelf, hand-designed networks. …
Saved in:
Published in: | arXiv.org 2022-01 |
---|---|
Main authors: | Wang, Erwei; Davis, James J; Georgios-Ilias Stavrou; Cheung, Peter Y K; Constantinides, George A; Abdelfattah, Mohamed S |
Format: | Article |
Language: | eng |
Subjects: | Accuracy; Computer architecture; Computer Science - Hardware Architecture; Computer Science - Learning; Efficiency; Field programmable gate arrays; Inference; Logic; Network topologies; Neural networks; Optimization; Shrinkage |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Wang, Erwei; Davis, James J; Georgios-Ilias Stavrou; Cheung, Peter Y K; Constantinides, George A; Abdelfattah, Mohamed S |
description | FPGA-specific DNN architectures using the native LUTs as independently trainable inference operators have been shown to achieve favorable area-accuracy and energy-accuracy tradeoffs. The first work in this area, LUTNet, exhibited state-of-the-art performance for standard DNN benchmarks. In this paper, we propose the learned optimization of such LUT-based topologies, resulting in higher-efficiency designs than via the direct use of off-the-shelf, hand-designed networks. Existing implementations of this class of architecture require the manual specification of the number of inputs per LUT, K. Choosing appropriate K a priori is challenging, and doing so at even high granularity, e.g. per layer, is a time-consuming and error-prone process that leaves FPGAs' spatial flexibility underexploited. Furthermore, prior works see LUT inputs connected randomly, which does not guarantee a good choice of network topology. To address these issues, we propose logic shrinkage, a fine-grained netlist pruning methodology enabling K to be automatically learned for every LUT in a neural network targeted for FPGA inference. By removing LUT inputs determined to be of low importance, our method increases the efficiency of the resultant accelerators. Our GPU-friendly solution to LUT input removal is capable of processing large topologies during their training with negligible slowdown. With logic shrinkage, we better the area and energy efficiency of the best-performing LUTNet implementation of the CNV network classifying CIFAR-10 by 1.54x and 1.31x, respectively, while matching its accuracy. This implementation also reaches 2.71x the area efficiency of an equally accurate, heavily pruned BNN. On ImageNet with the Bi-Real Net architecture, employment of logic shrinkage results in a post-synthesis area reduction of 2.67x vs LUTNet, allowing for implementation that was previously impossible on today's largest FPGAs. (An illustrative sketch of the learned input-removal idea appears after the record fields below.) |
doi_str_mv | 10.48550/arxiv.2112.02346 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-01 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2112_02346 |
source | arXiv.org; Free E-Journals |
subjects | Accuracy; Computer architecture; Computer Science - Hardware Architecture; Computer Science - Learning; Efficiency; Field programmable gate arrays; Inference; Logic; Network topologies; Neural networks; Optimization; Shrinkage |
title | Logic Shrinkage: Learned FPGA Netlist Sparsity for Efficient Neural Network Inference |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-31T06%3A55%3A48IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Logic%20Shrinkage:%20Learned%20FPGA%20Netlist%20Sparsity%20for%20Efficient%20Neural%20Network%20Inference&rft.jtitle=arXiv.org&rft.au=Wang,%20Erwei&rft.date=2022-01-02&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2112.02346&rft_dat=%3Cproquest_arxiv%3E2607473678%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2607473678&rft_id=info:pmid/&rfr_iscdi=true |
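The description above explains the core mechanism of logic shrinkage: an importance measure is learned for every input of every LUT during GPU-based training, and inputs judged unimportant are removed, so the fan-in K is effectively learned per LUT rather than fixed by hand. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the class name, the simple learned score with a hard threshold, and the truth-table parameterization are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumption-laden, not the paper's code): a layer of K-input
# LUT-like units whose truth-table entries are trainable, plus a learned
# importance score per input. After training, low-importance inputs are masked
# out, shrinking the effective K of each LUT.
import torch
import torch.nn as nn


class PrunableLUTLayer(nn.Module):
    def __init__(self, num_luts: int, k: int):
        super().__init__()
        self.num_luts, self.k = num_luts, k
        # Trainable stand-ins for each LUT's 2^K truth-table entries.
        self.table = nn.Parameter(torch.randn(num_luts, 2 ** k))
        # One learned importance score per LUT input.
        self.importance = nn.Parameter(torch.randn(num_luts, k))
        self.register_buffer("mask", torch.ones(num_luts, k))

    def shrink(self, threshold: float = 0.0) -> None:
        # Freeze a binary mask: inputs with low importance are dropped, so the
        # LUT no longer depends on them (its effective fan-in shrinks).
        self.mask.copy_((self.importance > threshold).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_luts, k) with entries in {0, 1}.
        x = x * self.mask                              # pruned inputs forced to 0
        weights = 2 ** torch.arange(self.k, device=x.device)
        idx = (x * weights).sum(dim=-1).long()         # per-LUT truth-table index
        table = self.table.expand(x.shape[0], -1, -1)  # (batch, num_luts, 2^K)
        return torch.gather(table, 2, idx.unsqueeze(-1)).squeeze(-1)


if __name__ == "__main__":
    layer = PrunableLUTLayer(num_luts=8, k=4)
    bits = torch.randint(0, 2, (2, 8, 4)).float()
    layer.shrink(threshold=0.0)     # keep only inputs with positive importance
    print(layer(bits).shape)        # torch.Size([2, 8])
```

In this toy version a pruned input simply contributes a constant 0 to the table index; in a real flow each LUT's truth table would also be re-synthesized for its reduced input set, which is where area and energy savings of the kind reported in the abstract would come from.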