QONNX: Representing Arbitrary-Precision Quantized Neural Networks


Bibliographic details
Main authors: Pappalardo, Alessandro, Umuroglu, Yaman, Blott, Michaela, Mitrevski, Jovan, Hawks, Ben, Tran, Nhan, Loncar, Vladimir, Summers, Sioni, Borras, Hendrik, Muhizi, Jules, Trahms, Matthew, Hsu, Shih-Chieh, Hauck, Scott, Duarte, Javier
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Pappalardo, Alessandro
Umuroglu, Yaman
Blott, Michaela
Mitrevski, Jovan
Hawks, Ben
Tran, Nhan
Loncar, Vladimir
Summers, Sioni
Borras, Hendrik
Muhizi, Jules
Trahms, Matthew
Hsu, Shih-Chieh
Hauck, Scott
Duarte, Javier
description We present extensions to the Open Neural Network Exchange (ONNX) intermediate representation format to represent arbitrary-precision quantized neural networks. We first introduce support for low precision quantization in existing ONNX-based quantization formats by leveraging integer clipping, resulting in two new backward-compatible variants: the quantized operator format with clipping and quantize-clip-dequantize (QCDQ) format. We then introduce a novel higher-level ONNX format called quantized ONNX (QONNX) that introduces three new operators -- Quant, BipolarQuant, and Trunc -- in order to represent uniform quantization. By keeping the QONNX IR high-level and flexible, we enable targeting a wider variety of platforms. We also present utilities for working with QONNX, as well as examples of its usage in the FINN and hls4ml toolchains. Finally, we introduce the QONNX model zoo to share low-precision quantized neural networks.
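The quantize-clip-dequantize (QCDQ) pattern the abstract describes can be sketched in plain NumPy (a hypothetical illustration of the idea, not the QONNX utilities themselves): standard uniform quantization is applied, then an explicit integer clip narrows the result to an arbitrary low-precision range before dequantization.

```python
import numpy as np

def qcdq(x, scale, zero_point, bitwidth, signed=True):
    """Quantize-clip-dequantize: simulate uniform quantization at an
    arbitrary bit width by clipping the integer values to the
    narrower range before dequantizing."""
    if signed:
        qmin, qmax = -(2 ** (bitwidth - 1)), 2 ** (bitwidth - 1) - 1
    else:
        qmin, qmax = 0, 2 ** bitwidth - 1
    q = np.round(x / scale) + zero_point   # quantize to integers
    q = np.clip(q, qmin, qmax)             # clip to the low-precision range
    return (q - zero_point) * scale        # dequantize back to floats

x = np.array([-1.0, -0.3, 0.0, 0.4, 1.2])
print(qcdq(x, scale=0.25, zero_point=0, bitwidth=3))
# → [-1.   -0.25  0.    0.5   0.75]  (3-bit signed range is [-4, 3])
```

For 3-bit signed quantization the integer range is [-4, 3], so the value 1.2 (integer 5 before clipping) saturates at 3 and dequantizes to 0.75. The function name and signature here are illustrative only; the paper's actual formats express this pattern with ONNX operators.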
format Article
creationdate 2022-06-15
rights http://creativecommons.org/licenses/by/4.0 (free to read)
identifier DOI: 10.48550/arxiv.2206.07527
source arXiv.org
subjects Computer Science - Hardware Architecture
Computer Science - Learning
Computer Science - Programming Languages
Statistics - Machine Learning
title QONNX: Representing Arbitrary-Precision Quantized Neural Networks