Image Segmentation-Based Multi-Focus Image Fusion Through Multi-Scale Convolutional Neural Network

A decision map contains complete and clear information about the image to be fused, and detecting the decision map is crucial to many image fusion problems, especially multi-focus image fusion. Nevertheless, obtaining a decision map that yields a satisfactory fusion result is necessary and often difficult. In this paper, we address this problem with a novel image segmentation-based multi-focus image fusion algorithm, in which detecting the decision map is treated as segmenting the source images into focused and defocused regions. The proposed method performs this segmentation with a multi-scale convolutional neural network, which analyzes each input image at multiple scales to derive feature maps describing the boundaries between focused and defocused regions. The feature maps are then inter-fused to produce a fused feature map, which is post-processed with initial segmentation, a morphological operation, and a watershed to obtain the segmentation map/decision map. We show that the decision map obtained from the multi-scale convolutional neural network is reliable and leads to high-quality fusion results. Experimental results validate that the proposed algorithm achieves excellent fusion performance under both qualitative and quantitative evaluation.
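The post-processing pipeline named in the abstract (initial segmentation, a morphological operation, and a watershed) can be illustrated with a small sketch. This is not the authors' implementation: the focus-score input, the Otsu threshold, the structuring-element radii, and the marker-based watershed from scikit-image are all assumptions chosen for illustration.

```python
# Minimal sketch (assumptions, not the authors' code) of turning a fused
# per-pixel focus-score map into a binary decision map via the three
# post-processing steps named in the abstract: initial segmentation,
# a morphological operation, and a watershed refinement.
import numpy as np
from skimage.filters import threshold_otsu, sobel
from skimage.morphology import closing, erosion, disk
from skimage.segmentation import watershed


def decision_map_from_scores(score_map: np.ndarray) -> np.ndarray:
    """score_map: H x W float array; higher = more likely in focus in source A."""
    # 1) Initial segmentation: global Otsu threshold on the fused score map.
    binary = score_map > threshold_otsu(score_map)

    # 2) Morphological operation: closing fills small holes and speckles.
    cleaned = closing(binary, disk(5))

    # 3) Watershed refinement: seed confidently focused/defocused interiors,
    #    then let the watershed snap the boundary to score-map gradients.
    markers = np.zeros(score_map.shape, dtype=int)
    markers[erosion(~cleaned, disk(10))] = 1   # confidently defocused
    markers[erosion(cleaned, disk(10))] = 2    # confidently focused
    labels = watershed(sobel(score_map), markers)
    return labels == 2                          # True where source A is chosen


def fuse(img_a: np.ndarray, img_b: np.ndarray, dmap: np.ndarray) -> np.ndarray:
    """Pixel-wise selection of the focused source according to the decision map."""
    return np.where(dmap[..., None] if img_a.ndim == 3 else dmap, img_a, img_b)
```

In the paper, the score map would come from the multi-scale CNN's fused feature map; the threshold choice and structuring-element sizes here are placeholders that would need tuning per dataset.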


Bibliographic details
Published in: IEEE Access, 2017-01, Vol. 5, pp. 15750-15761
Main authors: Du, Chaoben; Gao, Shesheng
Format: Article
Language: English
Subjects: Algorithm design and analysis; Algorithms; Artificial neural networks; Computer vision; Convolution; Convolutional neural network; decision map; Feature maps; Image fusion; Image processing; Image segmentation; Morphological operations; multi-focus image; Multiscale analysis; Neural networks; Transforms
Online access: Full text
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2017.2735019