Deep Gradual Multi-Exposure Fusion via Recurrent Convolutional Network

The performance of multi-exposure image fusion (MEF) has recently been improved by deep learning techniques, but several problems remain. In this paper, we propose a novel MEF network based on a recurrent neural network (RNN). Multi-exposure images carry different useful information depending on their exposure levels. To fuse them complementarily, we first extract local detail and global context features from the input source images and combine the two feature types separately. A weight map is learned from the local features so that each source image is fused according to its importance. Adopting an RNN as the backbone enables gradual fusion: each additional input progressively improves the fused result, and information is transferred to deeper levels of the network. Experimental results show that the proposed method reduces fusion artifacts and improves detail restoration compared to conventional methods.

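The abstract describes the architecture only at a high level: two feature branches (local detail and global context), a weight map learned from the local features, and a recurrence that folds in one exposure at a time. The PyTorch sketch below is a minimal illustration of that pipeline, not the authors' implementation; the module name GradualFusionRNN, all layer widths, and the exact recurrent update are assumptions, and the dilated convolutions in the context branch are suggested only by the paper's "dilated convolution filter" keyword.

```python
# Minimal sketch of the pipeline described in the abstract. Everything
# here (module name, channel counts, recurrent update rule) is an
# illustrative assumption, not the authors' released code.
import torch
import torch.nn as nn

class GradualFusionRNN(nn.Module):
    def __init__(self, feat_ch: int = 32):
        super().__init__()
        self.feat_ch = feat_ch
        # Local detail branch: plain 3x3 convolutions.
        self.local = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Global context branch: dilated 3x3 convolutions enlarge the
        # receptive field without downsampling.
        self.context = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        # Per-pixel importance weight learned from the local features.
        self.weight = nn.Sequential(nn.Conv2d(feat_ch, 1, 3, padding=1), nn.Sigmoid())
        # Recurrent update: merge this exposure's features into the
        # hidden state that carries the fusion built up so far.
        self.update = nn.Conv2d(3 * feat_ch, feat_ch, 3, padding=1)
        self.decode = nn.Conv2d(feat_ch, 3, 3, padding=1)

    def forward(self, exposures):
        # exposures: list of (N, 3, H, W) tensors, ordered by exposure level.
        n, _, h, w = exposures[0].shape
        hidden = exposures[0].new_zeros(n, self.feat_ch, h, w)
        for img in exposures:
            loc = self.local(img)
            ctx = self.context(img)
            w_map = self.weight(loc)  # how much this exposure should contribute
            hidden = torch.relu(self.update(torch.cat([w_map * loc, ctx, hidden], dim=1)))
        return torch.sigmoid(self.decode(hidden))

# Fusing three exposure levels of a 128x128 image:
model = GradualFusionRNN()
fused = model([torch.rand(1, 3, 128, 128) for _ in range(3)])  # -> (1, 3, 128, 128)
```

Because each new exposure only refines the hidden state, the same model accepts two, three, or more inputs, which matches the abstract's claim that additional inputs gradually improve the fusion.
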
Bibliographic Details
Published in: IEEE Access, 2021-01, Vol. 9, p. 1-1
Authors: Ryu, Je-Ho; Kim, Jong-Han; Kim, Jong-Ok
Format: Article
Language: English
Keywords: Brightness; Computer networks; Computer vision; Deep learning; Dilated convolution filter; Exposure; Feature extraction; Fuses; Gradual fusion; Image fusion; Image processing; Image reconstruction; Image restoration; Multi-exposure image fusion; Recurrent convolutional network; Recurrent neural networks
Online access: Full text
DOI: 10.1109/ACCESS.2021.3122540
ISSN: 2169-3536
EISSN: 2169-3536
Source: IEEE Open Access Journals; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals