Computation Error Analysis of Block Floating Point Arithmetic Oriented Convolution Neural Network Accelerator Design
The heavy burdens of computation and off-chip traffic impede deploying large-scale convolutional neural networks (CNNs) on embedded platforms. Because CNNs exhibit strong tolerance to computation errors, employing block floating point (BFP) arithmetic in CNN accelerators can reduce hardware cost and data traffic efficiently while maintaining classification accuracy. In this paper, we verify the effect of BFP word-width choices on CNN performance without retraining. Several typical CNN models, including VGG16, ResNet-18, ResNet-50 and GoogLeNet, were tested. Experiments revealed that an 8-bit mantissa, including the sign bit, in BFP representation induced less than 0.3% accuracy loss. In addition, we investigate the computational errors in theory and derive an upper bound on the noise-to-signal ratio (NSR), which provides practical guidance for BFP-based CNN engine design.
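The record contains no reference code; the sketch below is only an illustration of the block floating point representation the abstract describes: all values in a block share one exponent, and each value keeps its own signed mantissa of fixed width (8 bits including the sign bit in the reported experiments). The function name `bfp_quantize`, the nearest-rounding, and the max-magnitude exponent choice are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def bfp_quantize(block, mantissa_bits=8):
    """Illustrative block floating point quantizer: one shared exponent
    per block, signed mantissas of `mantissa_bits` bits (sign included)."""
    max_abs = float(np.max(np.abs(block)))
    if max_abs == 0.0:
        return np.zeros_like(block)
    shared_exp = int(np.floor(np.log2(max_abs)))   # exponent of the largest magnitude
    frac_bits = mantissa_bits - 1                  # one bit is reserved for the sign
    step = 2.0 ** (shared_exp - frac_bits + 1)     # quantization step for this block
    # Round each element to the nearest mantissa and clip to the signed word range.
    mantissa = np.clip(np.round(block / step),
                       -(2 ** frac_bits), 2 ** frac_bits - 1)
    return mantissa * step

# Hypothetical usage: quantize weight blocks and measure the empirical
# noise-to-signal ratio (NSR) that the paper bounds analytically.
w = np.random.randn(4, 64)
wq = np.stack([bfp_quantize(b) for b in w])
nsr = np.sum((w - wq) ** 2) / np.sum(w ** 2)
print(f"empirical NSR: {nsr:.2e}")
```

With a shared exponent per block, the multiply-accumulate operations inside a convolution reduce to integer arithmetic on the mantissas, which is where the hardware and traffic savings claimed in the abstract come from.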
Saved in:
Main authors: Song, Zhourui; Liu, Zhenyu; Wang, Dongsheng
Format: Article
Language: English
Subjects: Computer Science - Learning
Online access: https://arxiv.org/abs/1709.07776
creator | Song, Zhourui; Liu, Zhenyu; Wang, Dongsheng
description | The heavy burdens of computation and off-chip traffic impede deploying large-scale convolutional neural networks (CNNs) on embedded platforms. Because CNNs exhibit strong tolerance to computation errors, employing block floating point (BFP) arithmetic in CNN accelerators can reduce hardware cost and data traffic efficiently while maintaining classification accuracy. In this paper, we verify the effect of BFP word-width choices on CNN performance without retraining. Several typical CNN models, including VGG16, ResNet-18, ResNet-50 and GoogLeNet, were tested. Experiments revealed that an 8-bit mantissa, including the sign bit, in BFP representation induced less than 0.3% accuracy loss. In addition, we investigate the computational errors in theory and derive an upper bound on the noise-to-signal ratio (NSR), which provides practical guidance for BFP-based CNN engine design.
doi_str_mv | 10.48550/arxiv.1709.07776 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1709.07776 |
language | eng |
recordid | cdi_arxiv_primary_1709_07776 |
source | arXiv.org |
subjects | Computer Science - Learning |
title | Computation Error Analysis of Block Floating Point Arithmetic Oriented Convolution Neural Network Accelerator Design |
url | https://arxiv.org/abs/1709.07776