Joint demosaicking and denoising benefits from a two-stage training strategy

Image demosaicking and denoising are the first two key steps of the color image production pipeline. The classical processing sequence has for a long time consisted of applying denoising first, and then demosaicking. Applying the operations in this order leads to oversmoothing and checkerboard effects. Yet, it was difficult to change this order, because once the image is demosaicked, the statistical properties of the noise are dramatically changed and hard to handle by traditional denoising models. In this paper, we address this problem by a hybrid machine learning method. We invert the traditional color filter array (CFA) processing pipeline by first demosaicking and then denoising. Our demosaicking algorithm, trained on noiseless images, combines a traditional method and a residual convolutional neural network (CNN). This first stage retains all known information, which is the key point to obtain faithful final results. The noisy demosaicked image is then passed through a second CNN restoring a noiseless full-color image. This pipeline order completely avoids checkerboard effects and restores fine image detail. Although CNNs can be trained to solve jointly demosaicking–denoising end-to-end, we find that this two-stage training performs better and is less prone to failure. It is shown experimentally to improve on the state of the art, both quantitatively and in terms of visual quality.
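The pipeline order the abstract advocates (demosaick first while retaining all known CFA samples, then denoise) can be sketched with toy stand-ins for the paper's two CNN stages. Everything below is an illustrative assumption, not the authors' method: `bayer_mosaic`, `bilinear_demosaick`, and `box_denoise` are simple placeholders chosen only to make the stage order and the "retain known samples" constraint concrete.

```python
import numpy as np

def conv2(img, k):
    """Simple 'same' 2-D convolution with zero padding."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2,), (kw // 2,)))
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def bayer_mosaic(rgb):
    """Sample an RGB image on an RGGB Bayer color filter array.
    Returns the single-channel mosaic and the per-channel sampling mask."""
    mask = np.zeros_like(rgb)
    mask[0::2, 0::2, 0] = 1  # R
    mask[0::2, 1::2, 1] = 1  # G
    mask[1::2, 0::2, 1] = 1  # G
    mask[1::2, 1::2, 2] = 1  # B
    return (rgb * mask).sum(axis=2), mask

def bilinear_demosaick(mosaic, mask):
    """Toy stand-in for the paper's first stage (a traditional method plus a
    residual CNN). Missing samples are filled by normalized convolution, and
    known CFA samples pass through untouched -- the 'retain all known
    information' constraint the abstract stresses."""
    k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 4.0
    out = np.empty(mask.shape)
    for c in range(3):
        vals = conv2(mosaic * mask[..., c], k)
        wsum = conv2(mask[..., c], k)
        filled = vals / np.maximum(wsum, 1e-8)
        out[..., c] = np.where(mask[..., c] > 0, mosaic, filled)
    return out

def box_denoise(img):
    """Toy stand-in for the paper's second stage (a denoising CNN)."""
    k = np.ones((3, 3)) / 9.0
    return np.stack([conv2(img[..., c], k) for c in range(3)], axis=2)

rng = np.random.default_rng(0)
clean = rng.random((8, 8, 3))
noisy_rgb = clean + 0.05 * rng.standard_normal((8, 8, 3))
mosaic, mask = bayer_mosaic(noisy_rgb)
# The order advocated by the paper: demosaick first, then denoise.
restored = box_denoise(bilinear_demosaick(mosaic, mask))
```

The point of the sketch is purely structural: noise survives the first stage unchanged at the sampled positions, so all smoothing is deferred to the second stage, which sees a full-color image rather than a mosaic.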

Detailed Description

Bibliographic Details
Published in: Journal of computational and applied mathematics, 2023-12, Vol. 434, p. 115330, Article 115330
Main authors: Guo, Yu; Jin, Qiyu; Morel, Jean-Michel; Zeng, Tieyong; Facciolo, Gabriele
Format: Article
Language: English
Online access: Full text
DOI: 10.1016/j.cam.2023.115330
ISSN: 0377-0427
EISSN: 1879-1778
Source: ScienceDirect (Elsevier)
Subjects: Computer Science; Convolutional neural networks; Demosaicking; Denoising; Pipeline; Residual; Signal and Image Processing