Learning Cruxes to Push for Object Detection in Low-Quality Images
Highly degraded images greatly challenge existing algorithms to detect objects of interest in adverse scenarios such as rain, fog, and underwater scenes. Recently, researchers have developed sophisticated deep architectures to enhance image quality. Unfortunately, the visually appealing output of the enhancement module...
Saved in:
Published in: | IEEE Transactions on Circuits and Systems for Video Technology, 2024-12, Vol.34 (12), p.12233-12243 |
---|---|
Main authors: | Fu, Chenping; Xiao, Jiewen; Yuan, Wanqi; Liu, Risheng; Fan, Xin |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 12243 |
---|---|
container_issue | 12 |
container_start_page | 12233 |
container_title | IEEE transactions on circuits and systems for video technology |
container_volume | 34 |
creator | Fu, Chenping ; Xiao, Jiewen ; Yuan, Wanqi ; Liu, Risheng ; Fan, Xin |
description | Highly degraded images greatly challenge existing algorithms to detect objects of interest in adverse scenarios such as rain, fog, and underwater scenes. Recently, researchers have developed sophisticated deep architectures to enhance image quality. Unfortunately, the visually appealing output of the enhancement module does not necessarily yield high accuracy for deep detectors. Another feasible solution for low-quality image detection is to cast it as a domain adaptation problem. Typically, these approaches invoke complicated training strategies such as adversarial learning and graph matching. False detections are likely to occur in local regions of a low-quality image. In this paper, we propose a simple yet effective strategy with two learners for low-quality image detection. We devise the crux learner to generate cruxes that have great impact on detection performance. The catch-up learner with a simple residual transfer mechanism maps the feature distributions of crux regions to those favouring a deep detector. These two learners can be plugged into any CNN-based feature extraction network, e.g., ResNeXt101 and ResNet50, and yield high detection accuracy in various degraded scenarios. Extensive experiments on several public datasets demonstrate that our method achieves more promising results than state-of-the-art detection approaches. Code: https://github.com/xiaoDetection/learning-cruxes-to-push . A minimal illustrative sketch of this two-learner idea appears after the record fields below. |
doi_str_mv | 10.1109/TCSVT.2024.3432580 |
format | Article |
fullrecord | Source record from IEEE (ieee_id 10606519) and ProQuest (pqid 3147528791); publisher: New York: IEEE; CODEN: ITCTEM; rights: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024; ORCID iDs: 0000-0002-8991-4188, 0000-0002-9554-0565, 0000-0003-2813-8434, 0009-0006-7001-9913 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1051-8215 |
ispartof | IEEE transactions on circuits and systems for video technology, 2024-12, Vol.34 (12), p.12233-12243 |
issn | 1051-8215 ; 1558-2205 |
language | eng |
recordid | cdi_ieee_primary_10606519 |
source | IEEE Electronic Library (IEL) |
subjects | Accuracy ; Algorithms ; Degradation ; Detection algorithms ; Feature extraction ; Graph matching ; Image analysis ; Image detection ; Image enhancement ; Image quality ; low-quality scenes ; Machine learning ; Object detection ; Object recognition ; Training |
title | Learning Cruxes to Push for Object Detection in Low-Quality Images |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-21T19%3A45%3A24IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20Cruxes%20to%20Push%20for%20Object%20Detection%20in%20Low-Quality%20Images&rft.jtitle=IEEE%20transactions%20on%20circuits%20and%20systems%20for%20video%20technology&rft.au=Fu,%20Chenping&rft.date=2024-12-01&rft.volume=34&rft.issue=12&rft.spage=12233&rft.epage=12243&rft.pages=12233-12243&rft.issn=1051-8215&rft.eissn=1558-2205&rft.coden=ITCTEM&rft_id=info:doi/10.1109/TCSVT.2024.3432580&rft_dat=%3Cproquest_RIE%3E3147528791%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3147528791&rft_id=info:pmid/&rft_ieee_id=10606519&rfr_iscdi=true |
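
The description field above outlines a crux learner that scores feature regions likely to cause false detections and a catch-up learner that applies a residual transfer to those regions before the features reach a standard detector head. The following Python (PyTorch) sketch only illustrates that general idea under stated assumptions: the module names CruxLearner and CatchUpLearner, the sigmoid crux map, and the masked residual correction are hypothetical choices of this sketch, not the published architecture, whose actual implementation is available at https://github.com/xiaoDetection/learning-cruxes-to-push.

```python
# Illustrative sketch only (not the authors' code; see their repository for
# the real implementation). It shows how a crux-scoring module and a
# residual "catch-up" module could be plugged onto an off-the-shelf
# CNN feature extractor such as ResNet-50.
import torch
import torch.nn as nn
import torchvision


class CruxLearner(nn.Module):
    """Hypothetical module: predicts a per-location 'crux' score in [0, 1]."""

    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # (N, 1, H, W) map; high values mark regions prone to false detections.
        return self.score(feat)


class CatchUpLearner(nn.Module):
    """Hypothetical residual transfer: corrects only the crux regions."""

    def __init__(self, channels: int):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, feat: torch.Tensor, crux_map: torch.Tensor) -> torch.Tensor:
        # Non-crux locations pass through unchanged; crux locations get a
        # learned residual shift toward detector-friendly features.
        return feat + crux_map * self.residual(feat)


# Plug the two learners onto a ResNet-50 backbone truncated before pooling.
backbone = torchvision.models.resnet50(weights=None)
extractor = nn.Sequential(*list(backbone.children())[:-2])  # keeps C5 features

crux_learner = CruxLearner(channels=2048)
catch_up = CatchUpLearner(channels=2048)

images = torch.randn(2, 3, 224, 224)            # stand-in for low-quality images
feats = extractor(images)                       # (2, 2048, 7, 7)
refined = catch_up(feats, crux_learner(feats))  # features handed to a detector head
print(refined.shape)
```

The masked residual keeps non-crux features untouched, which matches the abstract's claim that only local crux regions of a low-quality image need their feature distributions shifted toward what the deep detector favours.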