Unsupervised Cross-Domain Deep-Fused Feature Descriptor for Efficient Image Retrieval

Deep feature-based methods show advantages in image retrieval, yet their robustness and generalization warrant further investigation. Toward this end, we propose a robust unsupervised Cross-domain Deep-fused Feature Descriptor (CDFD) method for efficient image retrieval. It analyzes deep features from a frequency-domain perspective, employing the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT) to enhance their robustness and distinguishability. Specifically, we begin from a distinct perspective and introduce the DWT to separate the low-frequency and high-frequency components of deep features. A multi-trait spatial fusion strategy is proposed to integrate the various features and strengthen the low-frequency information. By introducing a low-pass filter, it highlights the target region of the image and suppresses background clutter, thus improving the discriminative ability of the feature representation. To enhance the robustness of the features, a cross-domain convergence scheme is proposed that rationally integrates spatial and frequency-domain information via the DCT. In addition, a computationally simple decrement query expansion technique is introduced; combined with CDFD, it effectively improves retrieval performance. Extensive comparative experiments on several benchmark datasets demonstrate that our CDFD method is robust and effective for image retrieval and outperforms several unsupervised state-of-the-art methods. The source code is published at https://github.com/sevenjava/CDFD.
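
As an editorial illustration of the DWT step described above, the following sketch decomposes each channel of a CNN feature map into low- and high-frequency sub-bands and damps the high-frequency ones, a simple low-pass emphasis in the spirit of the abstract. The wavelet choice (Haar), the attenuation factor alpha, and all names are assumptions made for illustration; they are not taken from the paper or its repository.

    import numpy as np
    import pywt  # PyWavelets: pip install PyWavelets

    def dwt_lowpass_fuse(feature_map: np.ndarray, alpha: float = 0.5) -> np.ndarray:
        """Per-channel 2-D DWT of a C x H x W feature map; the high-frequency
        sub-bands (LH, HL, HH) are scaled by alpha < 1 before reconstruction,
        which suppresses background clutter while keeping the low-frequency
        approximation (LL) intact. Illustrative sketch, not the paper's code."""
        fused = np.empty_like(feature_map)
        for c in range(feature_map.shape[0]):
            ll, (lh, hl, hh) = pywt.dwt2(feature_map[c], "haar")
            fused[c] = pywt.idwt2((ll, (alpha * lh, alpha * hl, alpha * hh)), "haar")
        return fused

    # Demo on random activations shaped like a typical last-conv output.
    feats = np.random.rand(512, 14, 14).astype(np.float32)
    print(dwt_lowpass_fuse(feats).shape)  # (512, 14, 14)

Attenuating rather than discarding the detail sub-bands keeps some edge information while letting the low-frequency target-region content dominate the resulting representation.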

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE Internet of Things Journal, 2024-12, p.1-1
Main Author: He, Qiaoping
Format: Article
Language: English
Subjects:
Online Access: Order full text
container_end_page 1
container_issue
container_start_page 1
container_title IEEE internet of things journal
container_volume
creator He, Qiaoping
description Deep feature-based methods show advantages in image retrieval, yet their robustness and generalization warrant further investigation. Toward this end, we propose a robust unsupervised Cross-domain Deep-fused Feature Descriptor (CDFD) method for efficient image retrieval. It analyzes deep features from a frequency-domain perspective, employing the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT) to enhance their robustness and distinguishability. Specifically, we begin from a distinct perspective and introduce the DWT to separate the low-frequency and high-frequency components of deep features. A multi-trait spatial fusion strategy is proposed to integrate the various features and strengthen the low-frequency information. By introducing a low-pass filter, it highlights the target region of the image and suppresses background clutter, thus improving the discriminative ability of the feature representation. To enhance the robustness of the features, a cross-domain convergence scheme is proposed that rationally integrates spatial and frequency-domain information via the DCT. In addition, a computationally simple decrement query expansion technique is introduced; combined with CDFD, it effectively improves retrieval performance. Extensive comparative experiments on several benchmark datasets demonstrate that our CDFD method is robust and effective for image retrieval and outperforms several unsupervised state-of-the-art methods. The source code is published at https://github.com/sevenjava/CDFD.
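
Two further steps in the description lend themselves to short sketches. First, the cross-domain convergence idea of combining a spatial descriptor with DCT-domain coefficients; the pooling choice, the k x k low-frequency crop, and all names below are illustrative assumptions, not the paper's actual fusion scheme.

    import numpy as np
    from scipy.fft import dctn

    def cross_domain_descriptor(feature_map: np.ndarray, k: int = 8) -> np.ndarray:
        """Fuse a spatial descriptor (global average pooling over a C x H x W
        map) with a frequency descriptor (the k x k lowest-frequency 2-D DCT
        coefficients of the channel-averaged map), then L2-normalize.
        Requires H, W >= k. Illustrative sketch only."""
        spatial = feature_map.mean(axis=(1, 2))              # C values
        mean_map = feature_map.mean(axis=0)                  # H x W
        freq = dctn(mean_map, norm="ortho")[:k, :k].ravel()  # k*k values
        desc = np.concatenate([spatial, freq])
        return desc / (np.linalg.norm(desc) + 1e-12)

Second, the "decrement query expansion" technique is not specified in the abstract, so it is not reproduced here; for orientation, the standard average query expansion (AQE) baseline that such schemes typically refine looks like this:

    import numpy as np

    def average_query_expansion(query: np.ndarray, db: np.ndarray, top_k: int = 5) -> np.ndarray:
        """Average an L2-normalized query with its top_k nearest database
        descriptors (rows of db, also L2-normalized) and re-normalize."""
        sims = db @ query                      # cosine similarity via dot product
        nn = db[np.argsort(-sims)[:top_k]]     # top_k most similar rows
        expanded = query + nn.sum(axis=0)
        return expanded / np.linalg.norm(expanded)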
doi_str_mv 10.1109/JIOT.2024.3519175
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 2327-4662
ispartof IEEE internet of things journal, 2024-12, p.1-1
issn 2327-4662
2327-4662
language eng
recordid cdi_ieee_primary_10804638
source IEEE Electronic Library (IEL)
subjects Computational modeling
Computer vision
Cross-domain deep-fused feature descriptor
Deep features
Discrete cosine transform
Discrete cosine transforms
Discrete wavelet transform
Discrete wavelet transforms
Feature extraction
Frequency-domain analysis
Image retrieval
Robustness
Transforms
Vectors
title Unsupervised Cross-Domain Deep-Fused Feature Descriptor for Efficient Image Retrieval
url https://doi.org/10.1109/JIOT.2024.3519175