Decoupling Common and Unique Representations for Multimodal Self-supervised Learning
| Main authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Online access: | Order full text |
Abstract: The increasing availability of multi-sensor data sparks wide interest in multimodal self-supervised learning. However, most existing approaches learn only common representations across modalities while ignoring intra-modal training and modality-unique representations. We propose Decoupling Common and Unique Representations (DeCUR), a simple yet effective method for multimodal self-supervised learning. By distinguishing inter- and intra-modal embeddings through multimodal redundancy reduction, DeCUR can integrate complementary information across different modalities. We evaluate DeCUR in three common multimodal scenarios (radar-optical, RGB-elevation, and RGB-depth) and demonstrate consistent improvements across architectures and in both multimodal and modality-missing settings. With thorough experiments and comprehensive analysis, we hope this work provides valuable insights and raises more interest in research on the hidden relationships of multimodal representations.
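The abstract does not spell out the training objective, but the "multimodal redundancy reduction" idea can be illustrated with a Barlow Twins-style cross-correlation loss in which the embedding dimensions are split into "common" and "unique" subsets. The sketch below is a minimal, assumption-laden illustration, not the paper's implementation; the function and parameter names (`cross_correlation`, `decur_cross_modal_loss`, `n_common`, `lambd`) are hypothetical.

```python
import torch

def cross_correlation(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    """Batch-normalized D x D cross-correlation matrix of two embeddings."""
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    return (z_a.T @ z_b) / z_a.shape[0]

def decur_cross_modal_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                           n_common: int, lambd: float = 5e-3) -> torch.Tensor:
    """Hypothetical cross-modal term: align the first `n_common` embedding
    dimensions across modalities and decouple the remaining 'unique' ones."""
    c = cross_correlation(z_a, z_b)
    common = c[:n_common, :n_common]
    unique = c[n_common:, n_common:]
    # Common dims: cross-modal correlation diagonal pulled toward 1 (shared info).
    loss_common = (torch.diagonal(common) - 1.0).pow(2).sum()
    # Unique dims: cross-modal correlation diagonal pushed toward 0 (decoupled).
    loss_unique = torch.diagonal(unique).pow(2).sum()
    # Redundancy reduction: penalize all off-diagonal correlations.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return loss_common + loss_unique + lambd * off_diag

# Toy usage: 256-dim embeddings from two modalities, first 128 dims "common".
z_radar = torch.randn(32, 256)
z_optical = torch.randn(32, 256)
loss = decur_cross_modal_loss(z_radar, z_optical, n_common=128)
```

Matching the abstract's emphasis on intra-modal training, an intra-modal term (for instance, a standard Barlow Twins loss between two augmentations of the same modality) would complement this cross-modal term.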
DOI: 10.48550/arxiv.2309.05300