A neural network accelerated optimization method for FPGA
A neural network accelerated optimization method for the FPGA hardware platform is proposed. The method realizes optimized deployment of neural network algorithms on FPGA hardware from three aspects: computational speed, flexible transplantation, and development methodology.
Published in: | Journal of combinatorial optimization 2024-07, Vol.47 (5), Article 84 |
---|---|
Main authors: | Hu, Zhengwei; Zhu, Sijie; Wang, Leilei; Cao, Wangbin; Xie, Zhiyuan |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
creator | Hu, Zhengwei; Zhu, Sijie; Wang, Leilei; Cao, Wangbin; Xie, Zhiyuan |
description | A neural network accelerated optimization method for the FPGA hardware platform is proposed. The method realizes optimized deployment of neural network algorithms on FPGA hardware from three aspects: computational speed, flexible transplantation, and development methodology. Replacing multiplication with the Mitchell algorithm not only breaks through the speed bottleneck of neural network hardware acceleration caused by long multiplication cycles, but also frees parallel acceleration of the neural network from dependence on the number of hardware multipliers in the FPGA, fully exploiting the FPGA's parallelism to maximize computing speed. Based on a configurable strategy for neural network parameters, the number of network layers and the number of nodes per layer can be adjusted according to the logical resources of different FPGAs, improving the flexibility of neural network transplantation. The adoption of the HLS development method overcomes the shortcomings of the RTL method in designing complex neural network algorithms, such as high development difficulty and long development cycles. Using the Cyclone V SE 5CSEBA6U23I7 FPGA as the target device, a parameter-configurable BP neural network was designed based on the proposed method. The usage of logical resources such as ALUTs, flip-flops, RAM, and DSP blocks was 39.6%, 40%, 56.9%, and 18.3% of the pre-optimized design, respectively. The feasibility of the proposed method was verified using MNIST digit recognition and facial recognition as application scenarios. Compared to the pre-optimized design, the test time for MNIST digit recognition was reduced to 67.58% with a loss in success rate of only 0.195%; the test time for facial recognition was reduced to 69.571%, and the loss in success rate when combined with the LDA algorithm was within 4%. |
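The core trick in the abstract is Mitchell's logarithmic multiplication: writing x = 2^k (1 + f) with 0 ≤ f < 1, log2(x) is approximated by k + f, so a multiply collapses into shifts and one addition, with no hardware multiplier. A minimal fixed-point sketch in C of that idea follows; this is an illustrative reconstruction of the classic algorithm, not the authors' HLS implementation, and the 16-bit fraction width is an arbitrary choice.

```c
#include <stdint.h>

/* Index of the highest set bit, i.e. floor(log2(x)) for x > 0. */
int msb_pos(uint32_t x) {
    int k = 0;
    while (x >>= 1) k++;
    return k;
}

/*
 * Mitchell's approximate multiplication for non-negative integers.
 * With x = 2^k * (1 + f), log2(x) ~= k + f, so the product's log is
 * (ka + kb) + (fa + fb). The approximate antilog is:
 *   fa + fb <  1:  2^(ka+kb)   * (1 + fa + fb)
 *   fa + fb >= 1:  2^(ka+kb+1) * (fa + fb)
 * Only shifts and adds are used; the result underestimates the true
 * product by at most about 11.1%.
 */
uint32_t mitchell_mul(uint32_t a, uint32_t b) {
    const int FRAC = 16;                  /* fraction bits of the fixed point */
    if (a == 0 || b == 0) return 0;
    int ka = msb_pos(a), kb = msb_pos(b);
    /* fractional parts f = (x - 2^k) / 2^k as FRAC-bit fixed point */
    uint64_t fa = ((uint64_t)(a - (1u << ka)) << FRAC) >> ka;
    uint64_t fb = ((uint64_t)(b - (1u << kb)) << FRAC) >> kb;
    uint64_t s  = fa + fb;                /* fraction of the summed logs */
    int k = ka + kb;
    if (s < (1u << FRAC))                 /* no carry out of the fraction */
        return (uint32_t)((((1u << FRAC) + s) << k) >> FRAC);
    else                                  /* carry: 2^(k+1) * (fa + fb) */
        return (uint32_t)((s << (k + 1)) >> FRAC);
}
```

For example, mitchell_mul(5, 6) yields 2^4 * (1 + 0.25 + 0.5) = 28 rather than the exact 30; products of powers of two are exact. On an FPGA each such multiply needs only LUT/adder resources, which is why the abstract's approach removes the dependence on the DSP multiplier count.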
doi_str_mv | 10.1007/s10878-024-01117-x |
format | Article |
publisher | New York: Springer US |
fulltext | fulltext |
identifier | ISSN: 1382-6905 |
ispartof | Journal of combinatorial optimization, 2024-07, Vol.47 (5), Article 84 |
issn | 1382-6905; 1573-2886 |
language | eng |
recordid | cdi_proquest_journals_3072275800 |
source | SpringerLink Journals - AutoHoldings |
subjects | Acceleration; Algorithms; Back propagation networks; Combinatorics; Convex and Discrete Geometry; Face recognition; Field programmable gate arrays; Hardware; Mathematical Modeling and Industrial Mathematics; Mathematics; Mathematics and Statistics; Neural networks; Operations Research/Decision Theory; Optimization; Parameters; Testing time; Theory of Computation; Transplantation |
title | A neural network accelerated optimization method for FPGA |