Towards domain adaptation underwater image enhancement and restoration
Currently, deep convolutional neural networks have made significant progress in underwater image enhancement and restoration. However, most existing methods use fixed-scale convolutional kernels, which overfit easily in practice, resulting in poor domain adaptation. Therefore, this paper proposes an underwater image enhancement and restoration network based on an encoder-decoder framework that focuses on extracting generic features of degraded underwater images, yielding better restoration performance with domain adaptation. We first propose an atrous spatial attention module that uses atrous convolutions to expand the receptive field and cooperates with a spatial attention mechanism to accurately localize foggy image regions. Then, a feature aggregation method called cross-scale skip connection fuses global features rich in spatial location information with local features and integrates them into the decoder, ensuring that recovered areas are consistent with the surrounding pixels. Finally, to bring the recovered image closer to the ground truth, a weighted Euclidean color distance replaces the L1 distance as a novel reconstruction loss. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance and adapts well across domains.
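The weighted Euclidean color distance used as the reconstruction loss can be sketched as follows. The record does not give the paper's actual per-channel weights, so this sketch substitutes the common "redmean" weighting as an illustrative stand-in; the function name and weights are assumptions, not the authors' implementation.

```python
import math

def weighted_color_distance(pred, target):
    """Mean weighted Euclidean color distance between two RGB images.

    pred, target: sequences of (R, G, B) tuples, channel values in [0, 255].
    Uses the common 'redmean' weighting as an illustrative stand-in for
    the paper's (unspecified) channel weights.
    """
    total = 0.0
    for (r1, g1, b1), (r2, g2, b2) in zip(pred, target):
        r_mean = (r1 + r2) / 2.0
        # Weights depend on the mean red level: perceived sensitivity to
        # blue differences drops as red content rises (redmean heuristic).
        wr = 2.0 + r_mean / 256.0
        wg = 4.0
        wb = 2.0 + (255.0 - r_mean) / 256.0
        total += math.sqrt(wr * (r1 - r2) ** 2
                           + wg * (g1 - g2) ** 2
                           + wb * (b1 - b2) ** 2)
    return total / len(pred)
```

Unlike a plain per-channel L1 loss, this penalizes chromatic deviations according to perceptual weights, which matters for the strong blue-green color casts of underwater images.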
Saved in:
Published in: | Multimedia systems 2024-04, Vol.30 (2), Article 62 |
---|---|
Main authors: | Yang, Chao; Jiang, Longyu; Li, Zhicheng; Huang, Jianxing |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
creator | Yang, Chao; Jiang, Longyu; Li, Zhicheng; Huang, Jianxing |
description | Currently, deep convolutional neural networks have made significant research progress in the field of underwater image enhancement and restoration. However, most of the existing methods use fixed-scale convolutional kernels, which are easily overfitted in practice, resulting in poor domain adaptation. Therefore, in this paper, we propose an underwater image enhancement and restoration network based on an encoder and decoder framework that focuses on extracting generic features of degraded underwater images, resulting in better restoration performance with domain adaptation. We first propose the Atrous spatial attention module to perform Atrous convolutional expanding on the image receptive field, and then cooperate with the spatial attention mechanism to accurately localize the image fog region. Then, a feature aggregation method called Cross-Scale Skip connection is used to effectively fuse global features rich in spatial location information with local features and integrate them into the decoder to ensure that the recovered area is consistent with the surrounding pixels. Finally, in order to make the recovered image more close to the ground truth image, a novel weighted Euclidean color distance is used instead of L1 distance in this paper, and it is considered as a novel reconstruction loss. We have done extensive experiments to demonstrate that the proposed method is state-of-the-art in terms of performance and is highly adaptable in different aspects. |
doi | 10.1007/s00530-023-01246-z |
publisher | Berlin/Heidelberg: Springer Berlin Heidelberg |
issn | 0942-4962 (ISSN); 1432-1882 (EISSN) |
source | SpringerLink Journals - AutoHoldings |
subjects | Adaptation; Artificial neural networks; Computer Communication Networks; Computer Graphics; Computer Science; Cryptology; Data Storage Representation; Image enhancement; Image reconstruction; Multimedia Information Systems; Operating Systems; Regular Paper; Restoration; Underwater |