Convolutional analysis operator learning for multifocus image fusion

Sparse representation (SR), convolutional sparse representation (CSR) and convolutional dictionary learning (CDL) are synthesis-based priors that have proven successful in signal inverse problems such as multifocus image fusion. Unlike “synthesis” formulations, the “analysis” model assigns probabilities to signals through various forward measurements of the signals. Analysis operator learning (AOL) is a classical analysis-based learning method, and convolutional analysis operator learning (CAOL) is its convolutional form. CAOL uses unsupervised learning to train an autoencoding convolutional neural network (CNN) so that inverse problems can be solved more accurately. From the perspective of CAOL, this paper introduces learned convolutional regularizers into multifocus image fusion and proposes a CAOL-based multifocus image fusion algorithm. In the CDL stage, the convergent block proximal extrapolated gradient method with majorizer (BPEG-M) and an adaptive momentum restarting scheme are used. In the sparse fusion stage, an alternating direction method of multipliers (ADMM) approach with convolutional basis pursuit denoising (CBPDN) and an l1-norm maximum strategy are employed for the high-frequency and low-frequency components, respectively. Three types of multifocus images (static gray images, gray images of sports scenes, and color images) are tested to verify the performance of the proposed method. A comparison with representative methods demonstrates the superiority of our method in terms of both subjective observation and objective evaluation.
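The abstract only names the building blocks of the sparse fusion stage. As a rough illustration of the high-frequency rule it describes (CBPDN coding of each source followed by a coefficient merge), the sketch below uses the SPORCO library's ConvBPDN solver. It is a minimal sketch under stated assumptions, not the authors' implementation: the filter bank `D` is assumed to be pre-learned (for example by the CAOL/BPEG-M stage), the regularization weight `lmbda` is an illustrative default, and the per-pixel l1-activity choose-max merge is one common reading of coefficient fusion.

```python
import numpy as np
from sporco.admm import cbpdn


def fuse_highpass_cbpdn(h1, h2, D, lmbda=0.01):
    """Illustrative CBPDN-based fusion of two high-frequency components.

    h1, h2 : 2-D float arrays (high-pass bands of the two source images)
    D      : assumed pre-learned convolutional filter bank, e.g. shape (8, 8, K)
    """
    opt = cbpdn.ConvBPDN.Options({'Verbose': False, 'MaxMainIter': 200,
                                  'RelStopTol': 5e-3})
    solver1 = cbpdn.ConvBPDN(D, h1, lmbda, opt)
    x1 = solver1.solve()   # sparse coefficients, shape (H, W, 1, 1, K)
    solver2 = cbpdn.ConvBPDN(D, h2, lmbda, opt)
    x2 = solver2.solve()

    # Per-pixel l1 activity over the filter axis; keep the more active source.
    a1 = np.sum(np.abs(x1), axis=-1, keepdims=True)
    a2 = np.sum(np.abs(x2), axis=-1, keepdims=True)
    xf = np.where(a1 >= a2, x1, x2)

    # Reconstruct the fused high-frequency band from the merged coefficients.
    return solver1.reconstruct(xf).squeeze()
```

In the paper itself the filters would come from the CAOL learning stage solved with BPEG-M and momentum restarting; here the dictionary is simply treated as given.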

Detailed Description

Saved in:
Bibliographic Details
Published in: Signal processing. Image communication, 2022-04, Vol.103, p.116632, Article 116632
Main Authors: Zhang, Chengfang; Feng, Ziliang
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_start_page 116632
container_title Signal processing. Image communication
container_volume 103
creator Zhang, Chengfang
Feng, Ziliang
description Sparse representation (SR), convolutional sparse representation (CSR) and convolutional dictionary learning (CDL) are synthesis-based priors that have proven successful in signal inverse problems such as multifocus image fusion. Unlike “synthesis” formulations, the “analysis” model assigns probabilities to signals through various forward measurements of the signals. Analysis operator learning (AOL) is a classical analysis-based learning method, and convolutional analysis operator learning (CAOL) is its convolutional form. CAOL uses unsupervised learning to train an autoencoding convolutional neural network (CNN) so that inverse problems can be solved more accurately. From the perspective of CAOL, this paper introduces learned convolutional regularizers into multifocus image fusion and proposes a CAOL-based multifocus image fusion algorithm. In the CDL stage, the convergent block proximal extrapolated gradient method with majorizer (BPEG-M) and an adaptive momentum restarting scheme are used. In the sparse fusion stage, an alternating direction method of multipliers (ADMM) approach with convolutional basis pursuit denoising (CBPDN) and an l1-norm maximum strategy are employed for the high-frequency and low-frequency components, respectively. Three types of multifocus images (static gray images, gray images of sports scenes, and color images) are tested to verify the performance of the proposed method. A comparison with representative methods demonstrates the superiority of our method in terms of both subjective observation and objective evaluation.
•A new CAOL-based multifocus image fusion framework is proposed.
•Different fusion rules are used to alleviate fusion defects in connection areas.
•The fusion performance of CAOL under different filters is discussed.
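The low-frequency rule is described only as an “l1-norm maximum strategy.” One plausible reading, sketched below purely as an assumption, compares a windowed l1 activity map of the two low-frequency bands and copies each pixel from the more active band; the window size and the `uniform_filter` activity measure are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def fuse_lowpass_l1max(b1, b2, win=7):
    """Illustrative l1-norm-maximum fusion of two low-frequency bands.

    The local activity of each band is the mean absolute value over a
    win x win neighbourhood; each output pixel is copied from the band
    whose activity is larger at that location.
    """
    a1 = uniform_filter(np.abs(b1), size=win)
    a2 = uniform_filter(np.abs(b2), size=win)
    return np.where(a1 >= a2, b1, b2)
```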
doi_str_mv 10.1016/j.image.2022.116632
format Article
fulltext fulltext
identifier ISSN: 0923-5965
ispartof Signal processing. Image communication, 2022-04, Vol.103, p.116632, Article 116632
issn 0923-5965
1879-2677
language eng
recordid cdi_proquest_journals_2648262689
source Elsevier ScienceDirect Journals
subjects ADMM
Algorithms
Analysis-based signal model
Artificial neural networks
BPEG-M
CBPDN
Color imagery
Computer vision
Convolution analysis operator learning
Image processing
Inverse problems
Maximum strategies
Multifocus image fusion
Representations
Restarting
Teaching methods
Unsupervised learning
title Convolutional analysis operator learning for multifocus image fusion
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T15%3A05%3A18IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Convolutional%20analysis%20operator%20learning%20for%20multifocus%20image%20fusion&rft.jtitle=Signal%20processing.%20Image%20communication&rft.au=Zhang,%20Chengfang&rft.date=2022-04&rft.volume=103&rft.spage=116632&rft.pages=116632-&rft.artnum=116632&rft.issn=0923-5965&rft.eissn=1879-2677&rft_id=info:doi/10.1016/j.image.2022.116632&rft_dat=%3Cproquest_cross%3E2648262689%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2648262689&rft_id=info:pmid/&rft_els_id=S0923596522000030&rfr_iscdi=true