Image Denoising Using Sparsifying Transform Learning and Weighted Singular Values Minimization
In image denoising (IDN) processing, the low-rank property is usually considered an important image prior. As a convex relaxation of low rank, nuclear norm-based algorithms and their variants have attracted significant attention. These algorithms can be collectively called image domain-based methods...
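The abstract combines two priors: a low-rank prior in the image domain, enforced through weighted singular value minimization, and a sparsity prior under a learned sparsifying transform. The sketch below illustrates both building blocks in NumPy. It is not the authors' STLWSM implementation; the weight rule, the thresholds, and the stand-in orthonormal transform `W` are common conventions from the weighted-nuclear-norm and transform-sparsity literature, assumed here purely for illustration.

```python
# Minimal, illustrative sketch of the two priors mentioned in the abstract; NOT the
# authors' STLWSM implementation. The weight rule, thresholds, and patch handling
# are common conventions assumed here for illustration only.
import numpy as np


def weighted_svt(patch_matrix, noise_sigma, c=2.0, eps=1e-16):
    """Weighted singular value thresholding: the image-domain low-rank prior.

    Large singular values (dominant structure) receive small weights and are shrunk
    little; small ones (mostly noise) receive large weights and are shrunk to zero.
    """
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    n = patch_matrix.shape[1]                        # number of grouped patches
    weights = c * np.sqrt(n) * noise_sigma ** 2 / (s + eps)
    s_shrunk = np.maximum(s - weights, 0.0)          # weighted soft-thresholding
    return U @ np.diag(s_shrunk) @ Vt


def transform_sparsify(patch_matrix, W, threshold):
    """Transform-domain sparsity prior under a (stand-in) learned transform W.

    W is assumed to be a well-conditioned sparsifying transform learned offline;
    hard-thresholding its coefficients and inverting gives the sparse estimate.
    """
    coeffs = W @ patch_matrix
    coeffs[np.abs(coeffs) < threshold] = 0.0         # hard thresholding
    return np.linalg.pinv(W) @ coeffs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A rank-1 "group of similar patches" (64-dim patches, 40 patches) plus noise.
    clean = np.outer(rng.standard_normal(64), rng.standard_normal(40))
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    low_rank_est = weighted_svt(noisy, noise_sigma=0.1)
    W = np.linalg.qr(rng.standard_normal((64, 64)))[0]   # stand-in orthonormal transform
    sparse_est = transform_sparsify(noisy, W, threshold=0.3)
    print(np.linalg.norm(noisy - clean), np.linalg.norm(low_rank_est - clean))
```

In patch-based denoisers of this family, such steps are typically applied to groups of similar patches and alternated with an aggregation or update pass; the exact alternating scheme used by STLWSM is described in the paper itself.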
Published in: | Computational intelligence and neuroscience, 2020, Vol. 2020 (2020), p. 1-12 |
---|---|
Main authors: | Zheng, Jianwei; Guan, Qiu; Yang, Ping; Zhao, Yanwei; Wang, Wan Liang |
Format: | Article |
Language: | eng |
Keywords: | Algorithms; Cost function; Data mining; Dictionaries; Domains; Iterative methods; Learning; Learning algorithms; Machine learning; Noise; Noise reduction; Optimization; Researchers; Sparsity; Success |
Online access: | Full text |
container_end_page | 12 |
---|---|
container_issue | 2020 |
container_start_page | 1 |
container_title | Computational intelligence and neuroscience |
container_volume | 2020 |
creator | Zheng, Jianwei; Guan, Qiu; Yang, Ping; Zhao, Yanwei; Wang, Wan Liang |
description | In image denoising (IDN) processing, the low-rank property is usually considered an important image prior. As a convex relaxation of low rank, nuclear norm-based algorithms and their variants have attracted significant attention. These algorithms can be collectively called image domain-based methods, whose common drawback is that they require a great number of iterations to reach an acceptable solution. Meanwhile, the sparsity of images in a certain transform domain has also been exploited in image denoising problems. Sparsifying transform learning algorithms can achieve extremely fast computation as well as desirable performance. By taking advantage of both the image domain and the transform domain in a general framework, we propose a sparsifying transform learning and weighted singular values minimization method (STLWSM) for IDN problems. The proposed method makes full use of the advantages of both domains. For solving the nonconvex cost function, we also present an efficient alternative solution for acceleration. Experimental results show that the proposed STLWSM achieves improvement, both visually and quantitatively, by a large margin over state-of-the-art approaches based on a single domain alone. It also needs far fewer iterations than all the image-domain algorithms. |
doi_str_mv | 10.1155/2020/8392032 |
format | Article |
contributor | Lo Bosco, Giosuè |
publisher | Hindawi Publishing Corporation (Cairo, Egypt) |
pmid | 32849865 |
rights | Copyright © 2020 Yanwei Zhao et al.; distributed under the Creative Commons Attribution License (CC BY 4.0) |
orcid | 0000-0002-1552-5075; 0000-0002-1896-7135 |
fulltext | fulltext |
identifier | ISSN: 1687-5265 |
ispartof | Computational intelligence and neuroscience, 2020, Vol.2020 (2020), p.1-12 |
issn | 1687-5265; 1687-5273 |
language | eng |
recordid | cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_7439773 |
source | EZB-FREE-00999 freely available EZB journals; Wiley Online Library (Open Access Collection); PubMed Central; Alma/SFX Local Collection; PubMed Central Open Access |
subjects | Algorithms; Cost function; Data mining; Dictionaries; Domains; Iterative methods; Learning; Learning algorithms; Machine learning; Noise; Noise reduction; Optimization; Researchers; Sparsity; Success |
title | Image Denoising Using Sparsifying Transform Learning and Weighted Singular Values Minimization |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-24T18%3A33%3A29IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Image%20Denoising%20Using%20Sparsifying%20Transform%20Learning%20and%20Weighted%20Singular%20Values%20Minimization&rft.jtitle=Computational%20intelligence%20and%20neuroscience&rft.au=Zheng,%20Jianwei&rft.date=2020&rft.volume=2020&rft.issue=2020&rft.spage=1&rft.epage=12&rft.pages=1-12&rft.issn=1687-5265&rft.eissn=1687-5273&rft_id=info:doi/10.1155/2020/8392032&rft_dat=%3Cgale_pubme%3EA639994232%3C/gale_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2434371508&rft_id=info:pmid/32849865&rft_galeid=A639994232&rfr_iscdi=true |