DWSR: an architecture optimization framework for adaptive super-resolution neural networks based on meta-heuristics
Despite recent advancements in super-resolution neural network optimization, a fundamental challenge remains unresolved: as the number of parameters is reduced, the network's performance deteriorates significantly. This paper presents a novel framework, the Depthwise Separable Convolution Super-Resolution Neural Network Framework (DWSR), for optimizing super-resolution neural network architectures. Depthwise separable convolutions are introduced to reduce the number of parameters while minimizing the impact on the performance of the super-resolution network. The proposed framework uses a variant of the RUNge Kutta optimizer (RUN), called MoBRUN, as its search method. MoBRUN is a multi-objective binary version of RUN that balances multiple objectives when optimizing the neural network architecture. Experimental results on publicly available datasets indicate that the DWSR framework can reduce the number of parameters of the Residual Dense Network (RDN) model by 22.17% at a cost of only a 0.018 decrease in Peak Signal-to-Noise Ratio (PSNR); it can reduce the parameters of the Enhanced SRGAN (ESRGAN) model by 31.45% while losing only 0.08 PSNR, and those of the HAT model by 5.38% while losing only 0.02 PSNR.
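The parameter saving the abstract attributes to depthwise separable convolutions follows directly from the layer shapes. A minimal sketch in plain Python, with assumed channel and kernel sizes (the paper's actual RDN/ESRGAN/HAT layer configurations are not given in this record):

```python
# Parameter count of a standard convolution vs. a depthwise separable one.
def conv_params(c_in, c_out, k):
    """Standard k x k convolution, bias omitted."""
    return c_in * c_out * k * k

def dws_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution, bias omitted."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 64, 3           # assumed sizes, not taken from the paper
std = conv_params(c_in, c_out, k)    # 36864
dws = dws_params(c_in, c_out, k)     # 576 + 4096 = 4672
print(f"standard: {std}  separable: {dws}  ratio: {dws / std:.3f}")
# The ratio is 1/c_out + 1/k**2, about 0.127 here, i.e. roughly 87%
# fewer parameters for this single layer.
```

The network-level reductions reported in the abstract (5-31%) are smaller than this per-layer ratio, which is consistent with only a subset of layers being replaced.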
Published in: The Artificial intelligence review, 2024-01, Vol. 57 (2), p. 23, Article 23
Authors: Chu, Shu-Chuan; Dou, Zhi-Chao; Pan, Jeng-Shyang; Kong, Lingping; Snášel, Václav; Watada, Junzo
Format: Article
Language: English
Subjects: Artificial Intelligence; Computer Science; Heuristic; Mathematical models; Network management systems; Neural networks; Optimization; Parameters; Runge-Kutta method; Signal-to-noise ratio
Online access: Full text
DOI: 10.1007/s10462-023-10648-4
ISSN: 1573-7462; 0269-2821. EISSN: 1573-7462
Published: 2024-01-30. Dordrecht: Springer Netherlands. Peer reviewed; open access.
Rights: The Author(s) 2024. Published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).
Source: Springer Nature OA Free Journals; SpringerLink Journals
Full text (PDF): https://link.springer.com/content/pdf/10.1007/s10462-023-10648-4
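The abstract describes MoBRUN as a multi-objective binary variant of RUN that decides how to modify the architecture while balancing competing objectives. As a loose, speculative illustration of that idea only (the binary encoding, the per-layer parameter counts, the PSNR-degradation proxy, and the random sampling that stands in for RUN's update rules are all assumptions, not the paper's method), a Pareto-archive sketch:

```python
import random

random.seed(0)  # reproducible sampling

N_LAYERS = 8                        # assumed number of replaceable convolutions
layer_params = [36864] * N_LAYERS   # assumed per-layer parameter counts
separable_params = [4672] * N_LAYERS

def objectives(bits):
    """Two objectives to minimize: total parameters and a stand-in for
    PSNR degradation. The degradation term is a toy proxy, not the
    paper's measured PSNR."""
    params = sum(s if b else p
                 for b, p, s in zip(bits, layer_params, separable_params))
    degradation = 0.01 * sum(bits)  # each swap assumed to cost a little quality
    return params, degradation

def dominates(a, b):
    """Pareto dominance: a is no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Random sampling stands in for the RUN-based update rules; only the
# non-dominated (Pareto-optimal) architectures are kept.
front = []
for _ in range(200):
    bits = tuple(random.randint(0, 1) for _ in range(N_LAYERS))
    f = objectives(bits)
    if not any(dominates(objectives(o), f) for o in front):
        front = [o for o in front if not dominates(f, objectives(o))]
        front.append(bits)

for bits in sorted(front, key=sum):
    print(bits, objectives(bits))
```

Under these toy objectives, every additional swap trades parameters against quality, so the archive spans the full range from "no layers replaced" to "all layers replaced"; the actual trade-offs reported in the paper (e.g. 22.17% fewer parameters for 0.018 PSNR on RDN) come from measured performance, not a proxy like this.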