Dual-Stage Uncertainty Modeling for Unsupervised Cross-Domain 3D Model Retrieval
Published in: IEEE Transactions on Multimedia, 2024, Vol. 26, pp. 8996-9007
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Unsupervised cross-domain 3D model retrieval aims to retrieve unlabeled 3D models (target domain) using labeled 2D images (source domain). Domain adaptation approaches have shown impressive performance for cross-domain 3D model retrieval. However, conventional methods typically represent samples from different domains as deterministic points, overlooking the diversity in sample characteristics and relationships, which makes it difficult to obtain robust representations of both samples and categories. To address these challenges, we propose a dual-stage uncertainty modeling (DSUM) approach for unsupervised cross-domain 3D model retrieval, which uses Gaussian distributions to model the uncertainty characteristics of both samples and classes and to obtain robust, domain-invariant representations. Specifically, in the multi-view uncertainty encoding stage, we discard conventional pooling operations and exploit uncertainty modeling across multiple views to fuse the common and view-specific information of 2D images and 3D models. In the cross-domain feature alignment stage, we model samples belonging to the same category with a Gaussian distribution, which preserves sample diversity while helping to eliminate the domain discrepancy. Our method achieves improvements of 2.61% and 2.65% in terms of FT on two cross-domain datasets, respectively; extensive qualitative and quantitative experiments verify its superiority.
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2024.3384675
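
The abstract describes the method only at a high level. Below is a minimal, hypothetical PyTorch sketch of the two ideas it mentions: representing each view's feature as a diagonal Gaussian and fusing views by a precision-weighted (product-of-Gaussians) rule instead of pooling, and comparing source/target Gaussians with a closed-form 2-Wasserstein distance as a possible class-level alignment loss. The names (`ViewUncertaintyFusion`, `mu_head`, `logvar_head`, `gaussian_w2`) and the specific fusion/alignment choices are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ViewUncertaintyFusion(nn.Module):
    """Hypothetical multi-view uncertainty encoder: each view's feature is
    mapped to a diagonal Gaussian (mean, variance), and views are fused with
    a precision-weighted product-of-Gaussians rule rather than pooling."""

    def __init__(self, dim: int):
        super().__init__()
        self.mu_head = nn.Linear(dim, dim)      # per-view mean
        self.logvar_head = nn.Linear(dim, dim)  # per-view log-variance

    def forward(self, view_feats: torch.Tensor):
        # view_feats: (batch, num_views, dim)
        mu = self.mu_head(view_feats)
        var = self.logvar_head(view_feats).exp()       # keep variance positive
        prec = 1.0 / (var + 1e-6)                      # per-view precision
        fused_var = 1.0 / prec.sum(dim=1)              # combined variance over views
        fused_mu = fused_var * (prec * mu).sum(dim=1)  # precision-weighted mean
        return fused_mu, fused_var                     # one Gaussian per sample


def gaussian_w2(mu_s, var_s, mu_t, var_t):
    """Squared 2-Wasserstein distance between diagonal Gaussians; could serve
    as a distribution-level source/target alignment loss."""
    return ((mu_s - mu_t) ** 2).sum(-1) + ((var_s.sqrt() - var_t.sqrt()) ** 2).sum(-1)


if __name__ == "__main__":
    fusion = ViewUncertaintyFusion(dim=128)
    views = torch.randn(4, 12, 128)                    # 4 samples, 12 rendered views each
    mu, var = fusion(views)
    loss = gaussian_w2(mu[:2], var[:2], mu[2:], var[2:]).mean()
    print(mu.shape, var.shape, float(loss))
```

With precision-weighted fusion, low-variance (confident) views dominate the fused representation, which is one plausible reading of "discard the conventional pooling operations" in the abstract; the actual DSUM formulation may differ.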