IFT: Image Fusion Transformer for Ghost-free High Dynamic Range Imaging

Multi-frame high dynamic range (HDR) imaging aims to reconstruct ghost-free images with photo-realistic details from content-complementary but spatially misaligned low dynamic range (LDR) images. Existing HDR algorithms are prone to producing ghosting artifacts because they fail to capture long-range dependencies between LDR frames with large motion in dynamic scenes. To address this issue, we propose a novel image fusion transformer, referred to as IFT, which consists of a fast global patch searching (FGPS) module followed by a self-cross fusion (SCF) module for ghost-free HDR imaging. For each patch of the reference frame, the FGPS searches the supporting frames for the patches with the closest dependency, enabling long-range dependency modeling, while the SCF performs intra-frame and inter-frame feature fusion on the patches obtained by the FGPS with complexity linear in the input resolution. By matching similar patches between frames, objects with large motion in dynamic scenes can be aligned, which effectively alleviates ghosting artifacts. In addition, the proposed FGPS and SCF can be integrated into various deep HDR methods as efficient plug-in modules. Extensive experiments on multiple benchmarks show that our method achieves state-of-the-art performance both quantitatively and qualitatively.
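The abstract outlines two components: a fast global patch searching (FGPS) step that, for each reference-frame patch, retrieves the most similar patch from a supporting frame, and a self-cross fusion (SCF) step that fuses the matched patches. The sketch below illustrates only this general patch-matching-and-fusion idea in PyTorch; the function names, the cosine-similarity criterion, and the simple averaging used in place of the attention-based SCF are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of patch matching between a reference and a supporting LDR
# frame, followed by a naive fusion of the matched patches. Names and the
# fusion rule are illustrative assumptions, not the IFT implementation.
import torch
import torch.nn.functional as F


def global_patch_search(ref_feat, sup_feat, patch_size=8):
    """For each non-overlapping patch of ref_feat, return the index of the
    most similar patch (by cosine similarity) in sup_feat.

    ref_feat, sup_feat: (B, C, H, W) feature maps of the reference and a
    supporting frame; H and W are assumed divisible by patch_size.
    """
    # Unfold into flattened patch descriptors: (B, N, C * p * p)
    ref_p = F.unfold(ref_feat, patch_size, stride=patch_size).transpose(1, 2)
    sup_p = F.unfold(sup_feat, patch_size, stride=patch_size).transpose(1, 2)
    ref_p = F.normalize(ref_p, dim=-1)
    sup_p = F.normalize(sup_p, dim=-1)
    # Cosine-similarity matrix between all reference and supporting patches
    sim = ref_p @ sup_p.transpose(1, 2)          # (B, N_ref, N_sup)
    return sim.argmax(dim=-1)                    # best match per reference patch


def cross_fuse(ref_feat, sup_feat, match_idx, patch_size=8):
    """Gather the matched supporting patches and fuse them with the reference
    patches by simple averaging (a stand-in for the attention-based SCF)."""
    b, c, h, w = ref_feat.shape
    sup_p = F.unfold(sup_feat, patch_size, stride=patch_size)   # (B, C*p*p, N)
    idx = match_idx.unsqueeze(1).expand(-1, sup_p.size(1), -1)
    aligned = sup_p.gather(2, idx)                               # re-ordered patches
    ref_p = F.unfold(ref_feat, patch_size, stride=patch_size)
    fused = 0.5 * (ref_p + aligned)
    return F.fold(fused, (h, w), patch_size, stride=patch_size)


if __name__ == "__main__":
    ref = torch.randn(1, 16, 64, 64)   # reference-frame features
    sup = torch.randn(1, 16, 64, 64)   # supporting-frame features
    idx = global_patch_search(ref, sup)
    out = cross_fuse(ref, sup, idx)
    print(out.shape)                   # torch.Size([1, 16, 64, 64])
```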

Bibliographic Details
Main authors: Wang, Hailing; Li, Wei; Xi, Yuanyuan; Hu, Jie; Chen, Hanting; Li, Longyu; Wang, Yunhe
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Source: arXiv.org
DOI: 10.48550/arxiv.2309.15019
Published: 2023-09-26
Online access: https://arxiv.org/abs/2309.15019