A bi‐stream transformer for single‐image dehazing
Deep‐learning methods, such as encoder–decoder networks, have achieved impressive results in image dehazing. However, these methods often rely only on synthesized training data, which limits their generalizability to real‐world hazy images. To leverage prior knowledge of haze properties, we propose a bi‐encoder structure that integrates a prior‐based encoder into a traditional encoder–decoder network. The features from both encoders are fused using a feature enhancement module. We adopt transformer blocks instead of convolutions to model local feature associations. Experimental results demonstrate that our method surpasses state‐of‐the‐art methods on both synthesized and real hazy scenes. We therefore believe that our method will be a useful supplement to the collection of current artificial‐intelligence models and will benefit engineering applications in computer vision.
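The abstract mentions a prior‐based encoder that injects haze priors into the network, but this record does not say which prior the authors use. A widely cited haze prior in the dehazing literature is the dark channel prior (He et al.): in haze‐free outdoor patches, at least one color channel tends toward zero, while haze lifts all channels toward the atmospheric light. The following is a minimal illustrative sketch of that prior, assuming NumPy; it is not the paper's implementation.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of an RGB image: per-pixel minimum over the three
    color channels, followed by a minimum filter over a local patch.
    img: (H, W, 3) float array with values in [0, 1]."""
    mins = img.min(axis=2)                    # (H, W) channel-wise minimum
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")   # replicate borders for the filter
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# Haze-free patches have a near-zero dark channel; dense haze lifts it.
clear = np.zeros((8, 8, 3))
clear[..., 0] = 0.9             # bright red patch: one channel is still dark
hazy = np.full((8, 8, 3), 0.8)  # uniformly bright in all channels
print(dark_channel(clear).max(), dark_channel(hazy).min())  # → 0.0 0.8
```

A prior‐based encoder in the spirit of the abstract could take a map like this as an auxiliary input stream alongside the hazy image; the exact prior and fusion design are described in the full paper.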
Published in: ETRI journal, 2024-11
Format: Article
Language: English
Online access: Full text
Authors: Wang, Mingrui; Yan, Jinqiang; Wan, Chaoying; Yang, Guowei; Yu, Teng
DOI: 10.4218/etrij.2024-0037
ISSN: 1225-6463
EISSN: 2233-7326
Sources: DOAJ Directory of Open Access Journals; Wiley Online Library Free Content; EZB-FREE-00999 freely available EZB journals