Open-source FPGA-ML codesign for the MLPerf Tiny Benchmark

We present our development experience and recent results for the MLPerf Tiny Inference Benchmark on field-programmable gate array (FPGA) platforms. We use the open-source hls4ml and FINN workflows, which aim to democratize AI-hardware codesign of optimized neural networks on FPGAs. We present the design and implementation process for the keyword spotting, anomaly detection, and image classification benchmark tasks. The resulting hardware implementations are quantized, configurable, spatial dataflow architectures tailored for speed and efficiency and introduce new generic optimizations and common workflows developed as a part of this work. The full workflow is presented from quantization-aware training to FPGA implementation. The solutions are deployed on system-on-chip (Pynq-Z2) and pure FPGA (Arty A7-100T) platforms. The resulting submissions achieve latencies as low as 20 \(\mu\)s and energy consumption as low as 30 \(\mu\)J per inference. We demonstrate how emerging ML benchmarks on heterogeneous hardware platforms can catalyze collaboration and the development of new techniques and more accessible tools.
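The abstract describes quantized, fixed-point hardware implementations. As illustrative background only (not code from the paper), here is a minimal sketch of rounding a value onto an `ap_fixed`-style signed fixed-point grid, the kind of representation hls4ml- and FINN-generated designs commonly use; the function name and default bit widths are assumptions for illustration:

```python
def quantize(x, total_bits=8, int_bits=1):
    """Round x to the nearest value representable as a signed
    fixed-point number with `total_bits` total bits, of which
    `int_bits` are integer bits (sign bit included), clipping
    to the representable range."""
    frac_bits = total_bits - int_bits
    scale = 2 ** frac_bits
    lo = -(2 ** (total_bits - 1)) / scale   # most negative code
    hi = (2 ** (total_bits - 1) - 1) / scale  # most positive code
    q = round(x * scale) / scale            # snap to the grid
    return min(max(q, lo), hi)              # saturate out-of-range values
```

For example, with 8 total bits and 1 integer bit the grid step is 1/128, so values are rounded to multiples of 1/128 and saturated to the interval [-1, 127/128]. Quantization-aware training, as used in the paper's workflow, applies this kind of rounding during the forward pass so the network learns weights that remain accurate at low precision.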

Detailed Description

Bibliographic Details
Published in: arXiv.org, 2022-06
Main Authors: Borras, Hendrik; Di Guglielmo, Giuseppe; Duarte, Javier; Ghielmetti, Nicolò; Hawks, Ben; Hauck, Scott; Hsu, Shih-Chieh; Kastner, Ryan; Liang, Jason; Meza, Andres; Muhizi, Jules; Nguyen, Tai; Roy, Rushil; Tran, Nhan; Umuroglu, Yaman; Weng, Olivia; Yokuda, Aidan; Blott, Michaela
Format: Article
Language: English
Subjects: Anomalies; Benchmarks; Co-design; Energy consumption; Field programmable gate arrays; Hardware; Image classification; Inference; Neural networks; Platforms; System on chip; Workflow
Online Access: Full text
EISSN: 2331-8422