Deep Consistent-Inherent Learning for Cross-Modal Subspace Clustering
Published in: Guidance, Navigation and Control, 2024-08, Vol. 4 (3)
Main authors: ,
Format: Article
Language: English
Online access: Order full text
Abstract: Deep cross-modal clustering has been developing at a rapid pace and has attracted great attention. It aims to pursue a consistent subspace across different modalities with neural networks and achieves remarkable clustering performance. However, most existing deep cross-modal clustering methods do not simultaneously account for the inherent, modality-specific information of each modality and the local geometric structure of all cross-modal data, which inevitably degrades clustering performance. In this paper, we propose a novel method named Deep Consistent-Inherent Cross-Modal Subspace Clustering (DCCSC) to tackle these problems. Our method preserves the inherent independence of each modality while exploring the consistent information shared among modalities. Meanwhile, a neighbor graph is embedded into the proposed deep cross-modal subspace clustering framework to maintain the local geometric structure of the original data and learn a shared subspace representation. We thus integrate consistent-inherent learning and local structure learning into a unified deep framework to significantly improve cross-modal subspace clustering performance. Experimental results on four benchmark datasets demonstrate that the proposed method achieves superior clustering performance compared with state-of-the-art methods.
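The record contains only the abstract, so the following is a minimal Python (PyTorch) sketch of the kind of objective it describes: per-modality autoencoders for the inherent information, one shared self-expressive coefficient matrix for the consistent subspace, and a neighbor-graph Laplacian term for local structure. All module names, dimensions, graph construction, and loss weights are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a consistent-inherent deep subspace clustering objective.
# Each modality keeps its own autoencoder (inherent information); a single
# self-expressive matrix C ties the latent codes together (consistent subspace);
# a graph Laplacian penalty preserves local geometric structure.
import torch
import torch.nn as nn

class ModalityAE(nn.Module):
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def consistent_inherent_loss(xs, aes, C, L, alpha=1.0, beta=0.1, gamma=0.1):
    """xs: list of (n, d_v) modality tensors; aes: matching autoencoders;
    C: (n, n) shared self-expressive coefficients; L: (n, n) graph Laplacian."""
    recon = self_expr = graph = 0.0
    for x, ae in zip(xs, aes):
        z, x_hat = ae(x)                          # modality-specific (inherent) latent codes
        recon += ((x - x_hat) ** 2).sum()         # keep each modality's own information
        self_expr += ((z - C @ z) ** 2).sum()     # one consistent C shared by all modalities
        graph += torch.trace(z.T @ L @ z)         # neighbor-graph (local structure) term
    return recon + alpha * self_expr + beta * graph + gamma * (C ** 2).sum()

# Toy usage with random data for two modalities of 64 samples.
n = 64
xs = [torch.randn(n, 100), torch.randn(n, 50)]
aes = [ModalityAE(100, 32), ModalityAE(50, 32)]
C = nn.Parameter(torch.zeros(n, n))
A = torch.rand(n, n); A = ((A + A.T) / 2 > 0.9).float()   # placeholder adjacency; use a k-NN graph in practice
L = torch.diag(A.sum(1)) - A                               # graph Laplacian
loss = consistent_inherent_loss(xs, aes, C, L)
loss.backward()
```

In this kind of formulation, clustering is typically obtained afterwards by running spectral clustering on the learned affinity |C| + |C|^T; how DCCSC performs this final step is not specified in the record.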
ISSN: 2737-4807, 2737-4920
DOI: 10.1142/S2737480724410061