NIRN: Self-supervised noisy image reconstruction network for real-world image denoising
Existing image denoising methods for synthetic noise have made great progress. However, the distribution of real-world noise is more complicated, and it is difficult to obtain noise-free training images for deep learning. Although there have been a few attempts at training with only the input noisy images, they have not achieved satisfactory results in real-world image denoising…
Saved in:
Published in: | Applied intelligence (Dordrecht, Netherlands), 2022-11, Vol.52 (14), p.16683-16700 |
---|---|
Main authors: | Li, Xiaopeng; Fan, Cien; Zhao, Chen; Zou, Lian; Tian, Sheng |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Artificial Intelligence; Computer Science; Deep learning; Image processors; Image reconstruction; Machines; Manufacturing; Mechanical Engineering; Noise generators; Noise reduction; Processes; Teaching methods; Training; Visual effects |
Online access: | Full text |
container_end_page | 16700 |
container_issue | 14 |
container_start_page | 16683 |
container_title | Applied intelligence (Dordrecht, Netherlands) |
container_volume | 52 |
creator | Li, Xiaopeng; Fan, Cien; Zhao, Chen; Zou, Lian; Tian, Sheng |
description | Existing image denoising methods for synthetic noise have made great progress. However, the distribution of real-world noise is more complicated, and it is difficult to obtain noise-free training images for deep learning. Although there have been a few attempts at training with only the input noisy images, they have not achieved satisfactory results in real-world image denoising. Based on various priors of noisy images, we propose a novel Noisy Image Reconstruction Network (NIRN) that achieves excellent performance with a single input noisy image. The network is mainly composed of a clean image generator and a noise generator that separate the image into two latent layers, a noise layer and a noise-free layer. We constrain the two generators with a deep image prior and a noise prior, and train them adversarially with a reconstruction loss to exclude the possibility of overfitting. In addition, our method supports multi-frame image denoising, which makes full use of the noise randomness between frames to obtain better results. Extensive experiments have demonstrated the superiority of NIRN over the state of the art on both synthetic and real-world noise, in terms of both visual quality and quantitative metrics. |
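The description above outlines the core mechanism: two generators decompose the single noisy input into a noise-free layer and a noise layer, and a reconstruction loss forces the two layers to sum back to the input. Below is a minimal, hypothetical sketch of that decomposition idea, not the authors' code: the generator architecture, the noise-prior term, and all hyperparameters are assumptions, and the paper's adversarial training and multi-frame variant are omitted.

```python
# Minimal sketch (NOT the authors' NIRN implementation) of the two-generator
# decomposition described in the abstract: a clean-image generator and a
# noise generator whose outputs must sum back to the input noisy image.
# Architecture, prior weight, and optimizer settings are assumptions.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in generator; the paper's actual architecture is not specified here."""
    def __init__(self, ch=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, ch, 3, padding=1),
        )

    def forward(self, z):
        return self.net(z)

def denoise_single_image(noisy, steps=500, lr=1e-3):
    """Fit both generators to one noisy image, deep-image-prior style."""
    g_clean, g_noise = TinyGenerator(), TinyGenerator()
    z = torch.randn_like(noisy)  # fixed random code input, as in deep image prior
    params = list(g_clean.parameters()) + list(g_noise.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        clean, noise = g_clean(z), g_noise(z)
        recon_loss = ((clean + noise - noisy) ** 2).mean()  # noisy = clean + noise
        noise_prior = noise.abs().mean()  # crude zero-mean noise prior (assumed)
        loss = recon_loss + 0.1 * noise_prior  # 0.1 weight is an assumption
        opt.zero_grad()
        loss.backward()
        opt.step()
    return g_clean(z).detach()  # the recovered noise-free layer

# Usage: denoised = denoise_single_image(torch.rand(1, 3, 64, 64))
```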
doi_str_mv | 10.1007/s10489-022-03333-6 |
format | Article |
publisher | New York: Springer US |
fulltext | fulltext |
identifier | ISSN: 0924-669X |
ispartof | Applied intelligence (Dordrecht, Netherlands), 2022-11, Vol.52 (14), p.16683-16700 |
issn | 0924-669X 1573-7497 |
language | eng |
recordid | cdi_proquest_journals_2734437015 |
source | SpringerNature Journals |
subjects | Algorithms; Artificial Intelligence; Computer Science; Deep learning; Image processors; Image reconstruction; Machines; Manufacturing; Mechanical Engineering; Noise generators; Noise reduction; Processes; Teaching methods; Training; Visual effects |
title | NIRN: Self-supervised noisy image reconstruction network for real-world image denoising |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-23T05%3A52%3A05IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=NIRN:%20Self-supervised%20noisy%20image%20reconstruction%20network%20for%20real-world%20image%20denoising&rft.jtitle=Applied%20intelligence%20(Dordrecht,%20Netherlands)&rft.au=Li,%20Xiaopeng&rft.date=2022-11-01&rft.volume=52&rft.issue=14&rft.spage=16683&rft.epage=16700&rft.pages=16683-16700&rft.issn=0924-669X&rft.eissn=1573-7497&rft_id=info:doi/10.1007/s10489-022-03333-6&rft_dat=%3Cproquest_cross%3E2734437015%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2734437015&rft_id=info:pmid/&rfr_iscdi=true |