Semi-Supervised Feature Distillation and Unsupervised Domain Adversarial Distillation for Underwater Image Enhancement


Bibliographic Details
Published in: IEEE transactions on circuits and systems for video technology, 2024-08, Vol. 34 (8), p. 7671-7682
Authors: Qiao, Nianzu; Sun, Changyin; Dong, Lu; Ge, Quanbo
Format: Article
Language: English
Online Access: Order full text
Description: Deep learning has demonstrated outstanding performance in underwater image enhancement, but such approaches often demand substantial computational resources and long training times. Knowledge distillation is a widely used model-compression technique that has delivered strong results across many fields, yet it had not previously been applied to underwater image enhancement. To address these issues, this paper introduces a knowledge distillation technique for underwater image enhancement for the first time: a semi-supervised self/inter-feature distillation and unsupervised self-domain adversarial distillation approach. It comprises an adaptive local self-feature distillation technique, an information-lossless multi-scale inter-feature distillation technique, and a self-domain adversarial distillation approach in LAB-RGB space. Self-feature distillation improves the student network by correcting lossy feature maps against the maximally effective feature map, while inter-feature distillation lets the student network extract the maximum useful information learned by the teacher network. An information-loss-free pooling approach is further proposed to achieve multi-scale, loss-free information extraction. Self-domain adversarial distillation boosts student-network performance through unsupervised adaptive enhancement in LAB space and unsupervised domain adversarial distillation in RGB space. Finally, a self/inter alternate knowledge distillation training strategy is proposed to maximize the complementary benefits of the two distillation types. Extensive comparative experiments show that student networks with dissimilar structures, trained with the knowledge distillation technique designed in this paper, achieve outstanding underwater image enhancement results.
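The self- and inter-feature distillation terms described in the abstract can be sketched generically in plain Python. Everything below is an illustrative assumption, not the authors' actual formulation: the MSE objective, the function names, and the use of variance to pick the "most effective" feature map are stand-ins for the paper's adaptive local self-feature and multi-scale inter-feature losses.

```python
# Hypothetical sketch of feature-map distillation (not the paper's exact losses).
# Feature maps are modeled as flat lists of floats for simplicity.

def mse(a, b):
    """Mean squared error between two equally sized feature vectors."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def inter_feature_loss(teacher_feats, student_feats):
    """Schematic 'inter-feature distillation': average MSE between
    corresponding teacher and student feature maps."""
    pairs = list(zip(teacher_feats, student_feats))
    return sum(mse(t, s) for t, s in pairs) / len(pairs)

def self_feature_loss(student_feats):
    """Schematic 'self-feature distillation': align every student feature
    map with the most informative one (here, highest variance stands in
    for the paper's 'maximum effective feature map')."""
    def variance(f):
        m = sum(f) / len(f)
        return sum((x - m) ** 2 for x in f) / len(f)

    best = max(student_feats, key=variance)
    others = [f for f in student_feats if f is not best]
    if not others:
        return 0.0
    return sum(mse(best, f) for f in others) / len(others)
```

Under an alternating training scheme such as the paper's self/inter strategy, one would minimize `self_feature_loss` and `inter_feature_loss` in alternating phases rather than summing them into a single objective.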
DOI: 10.1109/TCSVT.2024.3378252
ISSN: 1051-8215
EISSN: 1558-2205
Source: IEEE Electronic Library (IEL)
Subjects: alternate training; Deep learning; Degradation; Histograms; Image color analysis; Image enhancement; Knowledge engineering; self-domain adversarial distillation; self-inter feature distillation; Training; Underwater image enhancement