Multi-level Cross-modal Alignment for Image Clustering
Format: Article
Language: English
Abstract: Recently, cross-modal pretraining models have been employed to
produce meaningful pseudo-labels to supervise the training of an image
clustering model. However, numerous erroneous alignments in a cross-modal
pretraining model can produce poor-quality pseudo-labels and degrade
clustering performance. To address this issue, we propose a novel
\textbf{Multi-level Cross-modal Alignment} method that improves the alignments
in a cross-modal pretraining model for downstream tasks by building a smaller
but better semantic space and aligning images and texts at three levels, i.e.,
the instance level, the prototype level, and the semantic level. Theoretical
results show that our proposed method converges and suggest effective means
to reduce its expected clustering risk. Experimental results on five benchmark
datasets clearly demonstrate the superiority of our method.
DOI: 10.48550/arxiv.2401.11740
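The abstract describes aligning images and texts at three levels: instance, prototype, and semantic. The sketch below is not the paper's formulation; it is a minimal PyTorch illustration of what such a three-level alignment objective could look like, assuming paired image/text features from a frozen CLIP-like encoder, a set of learnable cluster prototypes, and generic InfoNCE/KL loss forms. The function names, the prototype construction, and the specific loss choices are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of paired vectors."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


def multilevel_alignment_loss(img_emb, txt_emb, prototypes, temperature=0.07):
    """Illustrative three-level alignment objective (assumed, not the paper's).

    img_emb, txt_emb: (B, d) paired image/text features from a CLIP-like encoder.
    prototypes: (K, d) learnable cluster prototypes shared by both modalities.
    """
    # Instance level: pull each image toward its paired text description.
    l_instance = info_nce(img_emb, txt_emb, temperature)

    # Soft cluster assignments of both modalities against shared prototypes.
    protos = F.normalize(prototypes, dim=-1)
    img_assign = F.softmax(
        F.normalize(img_emb, dim=-1) @ protos.t() / temperature, dim=-1)
    txt_assign = F.softmax(
        F.normalize(txt_emb, dim=-1) @ protos.t() / temperature, dim=-1)

    # Prototype level: contrast the per-cluster assignment columns of the two
    # modalities so each prototype attracts the same samples in both views.
    l_prototype = info_nce(img_assign.t(), txt_assign.t(), temperature)

    # Semantic level: make the per-sample cluster distributions predicted
    # from the image view and the text view agree.
    l_semantic = F.kl_div(img_assign.log(), txt_assign.detach(),
                          reduction="batchmean")

    return l_instance + l_prototype + l_semantic


# Example usage with random stand-in features (dimensions are arbitrary).
B, d, K = 256, 512, 10
img = torch.randn(B, d)
txt = torch.randn(B, d)
protos = torch.nn.Parameter(torch.randn(K, d))
loss = multilevel_alignment_loss(img, txt, protos)
```

In this sketch the cluster prototypes double as the "smaller but better semantic space" mentioned in the abstract: both modalities are projected onto the same K prototypes, and agreement is enforced per sample (semantic level), per cluster (prototype level), and per image-text pair (instance level).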