Decoupling Common and Unique Representations for Multimodal Self-supervised Learning

The increasing availability of multi-sensor data sparks wide interest in multimodal self-supervised learning. However, most existing approaches learn only common representations across modalities while ignoring intra-modal training and modality-unique representations. We propose Decoupling Common and Unique Representations (DeCUR), a simple yet effective method for multimodal self-supervised learning. By distinguishing inter- and intra-modal embeddings through multimodal redundancy reduction, DeCUR can integrate complementary information across different modalities. We evaluate DeCUR in three common multimodal scenarios (radar-optical, RGB-elevation, and RGB-depth), and demonstrate its consistent improvement regardless of architectures and for both multimodal and modality-missing settings. With thorough experiments and comprehensive analysis, we hope this work can provide valuable insights and raise more interest in researching the hidden relationships of multimodal representations.
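To make the abstract's core idea concrete, the following is a minimal, hypothetical PyTorch sketch of a cross-modal redundancy-reduction loss with decoupled common and unique embedding dimensions. It is not the authors' implementation: the Barlow Twins-style cross-correlation objective, the even common/unique split, the weight lambd, and all function names are illustrative assumptions, and a full method would also include the intra-modal terms the abstract mentions.

import torch

def cross_correlation(z1, z2):
    # Batch-normalize each embedding dimension, then correlate across modalities.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    return (z1.T @ z2) / z1.shape[0]

def cross_modal_loss(za, zb, n_common, lambd=5e-3):
    # za, zb: (batch, dim) embeddings from two modalities; the first
    # n_common dimensions are treated as "common", the rest as "unique".
    c = cross_correlation(za, zb)
    common = c[:n_common, :n_common]
    unique = c[n_common:, n_common:]
    # Common dims: diagonal -> 1 (cross-modal invariance),
    # off-diagonal -> 0 (redundancy reduction).
    on_diag = (torch.diagonal(common) - 1).pow(2).sum()
    off_diag = (common - torch.diag(torch.diagonal(common))).pow(2).sum()
    # Unique dims: push the whole cross-modal block toward 0 so that
    # modality-unique information stays decorrelated across modalities.
    unique_term = unique.pow(2).sum()
    return on_diag + lambd * (off_diag + unique_term)

# Toy usage: 256-d embeddings from two modalities, first 128 dims common.
za = torch.randn(64, 256, requires_grad=True)
zb = torch.randn(64, 256, requires_grad=True)
loss = cross_modal_loss(za, zb, n_common=128)
loss.backward()

On this reading, driving the common block toward the identity aligns shared information across modalities, while zeroing the unique block leaves each modality free to encode what the other cannot.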

Bibliographic Details

Main Authors: Wang, Yi; Albrecht, Conrad M; Braham, Nassim Ait Ali; Liu, Chenying; Xiong, Zhitong; Zhu, Xiao Xiang
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Date: 2023-09-11
DOI: 10.48550/arxiv.2309.05300
Online Access: https://arxiv.org/abs/2309.05300 (arXiv.org)