Towards domain-invariant Self-Supervised Learning with Batch Styles Standardization

In Self-Supervised Learning (SSL), models are typically pretrained, fine-tuned, and evaluated on the same domains. However, they tend to perform poorly when evaluated on unseen domains, a challenge that Unsupervised Domain Generalization (UDG) seeks to address. Current UDG methods rely on domain labels, which are often challenging to collect, and on domain-specific architectures that lack scalability when confronted with numerous domains, making the current methodology impractical and rigid. Inspired by contrastive-based UDG methods that mitigate spurious correlations by restricting comparisons to examples from the same domain, we hypothesize that eliminating style variability within a batch could provide a more convenient and flexible way to reduce spurious correlations without requiring domain labels. To verify this hypothesis, we introduce Batch Styles Standardization (BSS), a relatively simple yet powerful Fourier-based method that standardizes the styles of images within a batch and is specifically designed for integration with SSL methods to tackle UDG. Combining BSS with existing SSL methods offers serious advantages over prior UDG methods: (1) it eliminates the need for domain labels or domain-specific network components to enhance domain-invariance in SSL representations, and (2) it offers flexibility, as BSS can be seamlessly integrated with diverse contrastive as well as non-contrastive SSL methods. Experiments on several UDG datasets demonstrate that BSS significantly improves downstream task performance on unseen domains, often outperforming or rivaling UDG methods. Finally, this work clarifies the underlying mechanisms contributing to BSS's effectiveness in improving domain-invariance in SSL representations and performance on unseen domains.
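
The abstract describes BSS only at a high level: a Fourier-based method that standardizes the styles of images within a batch. As a rough illustration of that idea, not the paper's actual algorithm, the sketch below swaps each image's low-frequency Fourier amplitude (commonly treated as "style") for that of a single randomly chosen reference image in the batch, while keeping each image's phase (content). The function name, the `beta` band-width parameter, and the array layout are hypothetical.

```python
# Minimal sketch (not the authors' implementation) of Fourier-based batch
# style standardization: every image keeps its own phase (content) but
# adopts the low-frequency amplitude (style) of one reference image.
import numpy as np


def standardize_batch_styles(batch: np.ndarray, beta: float = 0.1) -> np.ndarray:
    """batch: float array of shape (N, H, W, C) with values in [0, 1].
    beta: assumed fraction of the spectrum treated as low-frequency 'style'."""
    n, h, w, _ = batch.shape
    spectrum = np.fft.fft2(batch, axes=(1, 2))
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)

    # Pick one image's amplitude spectrum as the shared style reference.
    ref = amplitude[np.random.randint(n)]

    # Swap only the centered low-frequency band of half-size (beta*H, beta*W).
    amp = np.fft.fftshift(amplitude, axes=(1, 2))
    ref = np.fft.fftshift(ref, axes=(0, 1))
    bh, bw = max(1, int(h * beta)), max(1, int(w * beta))
    ch, cw = h // 2, w // 2
    amp[:, ch - bh:ch + bh, cw - bw:cw + bw, :] = ref[ch - bh:ch + bh, cw - bw:cw + bw, :]
    amplitude = np.fft.ifftshift(amp, axes=(1, 2))

    # Recombine the standardized amplitudes with each image's original phase.
    out = np.fft.ifft2(amplitude * np.exp(1j * phase), axes=(1, 2)).real
    return np.clip(out, 0.0, 1.0)


# Example: standardize the styles of 8 random 32x32 RGB images before SSL augmentations.
images = np.random.rand(8, 32, 32, 3)
standardized = standardize_batch_styles(images, beta=0.1)
```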

Bibliographic Details
Main Authors: Scalbert, Marin; Vakalopoulou, Maria; Couzinié-Devy, Florent
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
DOI: 10.48550/arxiv.2303.06088
Date: 2023-03-10
Rights: http://creativecommons.org/licenses/by-nc-sa/4.0 (open access)
Full Text: https://arxiv.org/abs/2303.06088
Source: arXiv.org
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-20T18%3A54%3A21IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Towards%20domain-invariant%20Self-Supervised%20Learning%20with%20Batch%20Styles%20Standardization&rft.au=Scalbert,%20Marin&rft.date=2023-03-10&rft_id=info:doi/10.48550/arxiv.2303.06088&rft_dat=%3Carxiv_GOX%3E2303_06088%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true