Hypergraph-Based Remaining Prototype Alignment for Open-Set Cross-Domain Image Retrieval
Published in: IEEE Transactions on Multimedia, 2025-01, p. 1-15
Format: Article
Language: English
Abstract: Existing cross-domain image retrieval (CDIR) methods exhibit a strong dependency on prior knowledge of the training categories, which leads to class confusion and domain shift when unseen categories are encountered in open-set environments. In this paper, we explore the CDIR task in open-set environments and introduce the Hypergraph-Based Remaining Prototype Alignment (RePro) framework for this task. Specifically, to address the unseen-class confusion caused by category differences, we use the Remaining Prototype Embedding (RPE) module to generate remaining embeddings of images and treat these embeddings as domain noise, rather than mapping them directly to explicit domain-unified prototypes. To overcome domain shift, our method leverages the high-order correlations among both domains and categories through the Heterogeneous Structure Alignment (HSA) module, which constructs a heterogeneous hypergraph from intra-domain and inter-category correlations. In addition, we build two multi-domain datasets for open-set cross-domain image retrieval, i.e., OCD-PACS and OCD-VLCS. Each dataset is divided into seen and unseen categories for training and testing, and each class contains images from four different domains. Extensive experiments and ablation studies on these two datasets demonstrate the superiority of our method over current state-of-the-art methods.
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2025.3535298
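Note: the abstract describes the HSA module as building a heterogeneous hypergraph from intra-domain and inter-category correlations, but gives no implementation details. The sketch below is only an illustration of that general idea, not the authors' method: it builds one hyperedge per domain and one per category, then applies a standard HGNN-style propagation step (Feng et al., 2019). The function names, the uniform hyperedge weights, and the toy domain/category labels (loosely modeled on PACS) are all assumptions.

```python
import numpy as np

def build_incidence(domains, categories):
    """Illustrative heterogeneous hypergraph: each distinct domain and each
    distinct category defines one hyperedge over the samples sharing it."""
    n = len(domains)
    edge_groups = []
    for d in sorted(set(domains)):
        edge_groups.append([i for i in range(n) if domains[i] == d])
    for c in sorted(set(categories)):
        edge_groups.append([i for i in range(n) if categories[i] == c])
    H = np.zeros((n, len(edge_groups)))
    for e, members in enumerate(edge_groups):
        H[members, e] = 1.0
    return H

def hypergraph_propagate(X, H):
    """One smoothing step, D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X,
    with uniform hyperedge weights W = I (an assumption)."""
    W = np.eye(H.shape[1])
    Dv = np.diag(1.0 / np.sqrt(H @ W @ np.ones(H.shape[1])))  # vertex degrees
    De = np.diag(1.0 / H.sum(axis=0))                          # hyperedge sizes
    return Dv @ H @ W @ De @ H.T @ Dv @ X

# Toy example: 6 samples from 2 domains and 3 categories, 4-dim embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
domains = ["photo", "photo", "photo", "sketch", "sketch", "sketch"]
categories = ["dog", "cat", "horse", "dog", "cat", "horse"]
H = build_incidence(domains, categories)
X_smoothed = hypergraph_propagate(X, H)
print(X_smoothed.shape)  # (6, 4)
```

Under this construction, every sample lies on exactly two hyperedges (its domain and its category), so the propagation mixes features along both intra-domain and inter-category groupings at once, which is the kind of high-order correlation the abstract attributes to the HSA module.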