On the Generalization and Causal Explanation in Self-Supervised Learning

Self-supervised learning (SSL) methods learn from unlabeled data and achieve high generalization performance on downstream tasks. However, they may also suffer from overfitting to their training data and lose the ability to adapt to new tasks. To investigate this phenomenon, we conduct experiments on various SSL methods and datasets and make two observations: (1) Overfitting occurs abruptly in later layers and epochs, while generalizing features are learned in early layers for all epochs; (2) Coding rate reduction can be used as an indicator to measure the degree of overfitting in SSL models. Based on these observations, we propose Undoing Memorization Mechanism (UMM), a plug-and-play method that mitigates overfitting of the pre-trained feature extractor by aligning the feature distributions of the early and the last layers to maximize the coding rate reduction of the last layer output. The learning process of UMM is a bi-level optimization process. We provide a causal analysis of UMM to explain how UMM can help the pre-trained feature extractor overcome overfitting and recover generalization. We also demonstrate that UMM significantly improves the generalization performance of SSL methods on various downstream tasks.
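The abstract's second observation uses coding rate reduction as an overfitting indicator. Below is a minimal sketch of how such a quantity can be computed, assuming the MCR²-style definition R(Z) = ½ log det(I + d/(n·ε²) ZᵀZ) commonly used in the coding-rate literature; the function names, the ε parameter, and the two-group split are illustrative assumptions and not the paper's actual implementation.

```python
import torch

def coding_rate(Z: torch.Tensor, eps: float = 0.5) -> torch.Tensor:
    """Rate estimate R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z^T Z) for features Z of shape (n, d)."""
    n, d = Z.shape
    I = torch.eye(d, device=Z.device, dtype=Z.dtype)
    return 0.5 * torch.logdet(I + (d / (n * eps ** 2)) * Z.T @ Z)

def coding_rate_reduction(Z: torch.Tensor, groups, eps: float = 0.5) -> torch.Tensor:
    """Delta R = R(Z) - sum_j (n_j / n) * R(Z_j), where `groups` index a partition of the rows of Z.
    Larger values indicate more diverse, less collapsed feature representations."""
    n = Z.shape[0]
    r_whole = coding_rate(Z, eps)
    r_parts = sum((len(g) / n) * coding_rate(Z[g], eps) for g in groups)
    return r_whole - r_parts

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy example: 256 normalized feature vectors split into two groups (e.g. two augmented views).
    Z = torch.nn.functional.normalize(torch.randn(256, 64), dim=1)
    groups = [torch.arange(0, 128), torch.arange(128, 256)]
    print(float(coding_rate_reduction(Z, groups)))
```

In this reading, tracking such a quantity on the last-layer outputs would give a scalar signal of how much the representation has collapsed onto memorized directions, which is consistent with the paper's use of coding rate reduction as an overfitting indicator.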

Bibliographic details

Main authors: Qiang, Wenwen; Song, Zeen; Gu, Ziyin; Li, Jiangmeng; Zheng, Changwen; Sun, Fuchun; Xiong, Hui
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning
DOI: 10.48550/arxiv.2410.00772
Source: arXiv.org
Online access: Order full text