Convolution neural network with low operation FLOPS and high accuracy for image recognition
Convolutional neural networks are made deeper and wider for better accuracy, but this requires more computation. As a network grows deeper, more information is lost between layers. To mitigate this drawback, the residual structure was developed to connect a layer to the information of previous layers. This is a good solution for preventing information loss, but it requires a huge number of parameters for deeper-layer operations. In this study, a fast computational algorithm is proposed to reduce the parameters and save operations by modifying the DenseNet deep-layer block. With channel-merging procedures, this solution eases the multiplicative growth of the parameter count in deeper layers. The approach not only reduces the parameters and FLOPs but also maintains high accuracy. Compared with the original DenseNet and ResNet-110, the parameters are reduced by about 30–70% while the accuracy degrades only slightly. The lightweight network can be implemented on a low-cost embedded system for real-time applications.
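The article itself provides no code, so the following is only a minimal PyTorch-style sketch of the general idea described in the abstract: a DenseNet-style block in which the concatenated feature maps are periodically merged through a 1×1 convolution, so that the channel count seen by later layers, and hence the parameter count, stops growing with depth. The module name `MergedDenseBlock`, the growth rate, and the merge interval are illustrative assumptions, not the authors' exact design; a rough parameter-count comparison against a plain dense block follows the record fields at the end of this page.

```python
# Minimal sketch (assumption, not the authors' exact block): a DenseNet-style
# block whose accumulated channels are periodically merged by a 1x1 convolution,
# so the input width of later layers -- and the parameter count -- stays bounded.
import torch
import torch.nn as nn

class MergedDenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=12, num_layers=6, merge_every=3):
        super().__init__()
        self.layers = nn.ModuleList()
        self.merges = nn.ModuleDict()
        channels = in_channels
        for i in range(num_layers):
            # Each layer sees the concatenation of everything produced so far.
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            channels += growth_rate
            # Channel merging: every few layers, squeeze the concatenated maps
            # back down with a 1x1 conv instead of letting them keep growing.
            if (i + 1) % merge_every == 0:
                merged = max(in_channels, channels // 2)
                self.merges[str(i)] = nn.Conv2d(channels, merged, kernel_size=1, bias=False)
                channels = merged
        self.out_channels = channels

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = torch.cat([x, layer(x)], dim=1)   # dense connectivity
            if str(i) in self.merges:
                x = self.merges[str(i)](x)        # merge channels, cap the growth
        return x

# Quick shape check:
# block = MergedDenseBlock(in_channels=24)
# print(block(torch.randn(1, 24, 32, 32)).shape, block.out_channels)
```

The 1×1 merge plays a role similar to a DenseNet transition layer, but applied inside the block: later 3×3 convolutions then operate on a bounded number of input channels instead of an ever-growing concatenation.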
Saved in:
Published in: | Journal of real-time image processing 2021-08, Vol.18 (4), p.1309-1319 |
---|---|
Main authors: | Hsia, Shih-Chang; Wang, Szu-Hong; Chang, Chuan-Yu |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | 1319 |
---|---|
container_issue | 4 |
container_start_page | 1309 |
container_title | Journal of real-time image processing |
container_volume | 18 |
creator | Hsia, Shih-Chang; Wang, Szu-Hong; Chang, Chuan-Yu |
description | Convolutional neural networks are made deeper and wider for better accuracy, but this requires more computation. As a network grows deeper, more information is lost between layers. To mitigate this drawback, the residual structure was developed to connect a layer to the information of previous layers. This is a good solution for preventing information loss, but it requires a huge number of parameters for deeper-layer operations. In this study, a fast computational algorithm is proposed to reduce the parameters and save operations by modifying the DenseNet deep-layer block. With channel-merging procedures, this solution eases the multiplicative growth of the parameter count in deeper layers. The approach not only reduces the parameters and FLOPs but also maintains high accuracy. Compared with the original DenseNet and ResNet-110, the parameters are reduced by about 30–70% while the accuracy degrades only slightly. The lightweight network can be implemented on a low-cost embedded system for real-time applications. |
doi_str_mv | 10.1007/s11554-021-01140-9 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1861-8200 |
ispartof | Journal of real-time image processing, 2021-08, Vol.18 (4), p.1309-1319 |
issn | 1861-8200; 1861-8219 |
language | eng |
recordid | cdi_proquest_journals_2918675839 |
source | ProQuest Central UK/Ireland; SpringerLink Journals - AutoHoldings; ProQuest Central |
subjects | Accuracy; Algorithms; Artificial intelligence; Artificial neural networks; Computer Graphics; Computer Science; Data compression; Electron microscopes; Field programmable gate arrays; Image Processing and Computer Vision; Multimedia Information Systems; Neural networks; Parameter modification; Pattern Recognition; Signal, Image and Speech Processing; Special Issue Paper |
title | Convolution neural network with low operation FLOPS and high accuracy for image recognition |
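To make the abstract's parameter-reduction claim concrete, the standalone sketch below counts the 3×3-convolution weights of a plain DenseNet-style block against the channel-merged variant sketched above. The growth rate, depth, and merge interval are illustrative assumptions; the 30–70% figures reported in the paper depend on the authors' actual block design, which is not given here, so this only illustrates the trend.

```python
# Rough, standalone illustration (not the authors' exact design): compare the
# conv weight count of a plain DenseNet-style block, where the input channel
# width keeps growing, with a variant that merges channels via a 1x1 conv
# every few layers.

def dense_block_params(in_channels, growth_rate, num_layers, merge_every=None):
    """Count conv weights (3x3 dense layers plus optional 1x1 merges)."""
    channels, params = in_channels, 0
    for i in range(num_layers):
        params += channels * growth_rate * 3 * 3        # 3x3 conv of this layer
        channels += growth_rate                          # dense concatenation
        if merge_every and (i + 1) % merge_every == 0:
            merged = max(in_channels, channels // 2)
            params += channels * merged * 1 * 1          # 1x1 channel merge
            channels = merged
    return params

plain  = dense_block_params(24, growth_rate=12, num_layers=12)
merged = dense_block_params(24, growth_rate=12, num_layers=12, merge_every=3)
print(f"plain:  {plain:,} weights")
print(f"merged: {merged:,} weights  ({1 - merged / plain:.0%} fewer)")
```

With these assumed settings the merged variant uses roughly half the weights of the plain block, since every merge resets the channel width that all subsequent 3×3 convolutions must consume.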