Feature-Based Style Randomization for Domain Generalization

Domain generalization (DG), a topic of growing interest, aims to learn a generic model on multiple source domains and then generalize directly to an arbitrary unseen target domain without any additional adaptation. Among previous DG models, data augmentation based methods, which generate virtual data to supplement the observed source domains, have proven effective. To simulate possible unseen domains, most of them enrich the diversity of the original data via image-level style transformation. However, we argue that the potential styles can hardly be exhaustively illustrated and fully augmented given the limited reference styles, so diversity cannot always be guaranteed. Unlike image-level augmentation, in this paper we develop a simple yet effective feature-based style randomization module to achieve feature-level augmentation, which produces random styles by integrating random noise into the original style. Compared with existing image-level augmentation, our feature-level augmentation is more goal-oriented and sample-diverse. Furthermore, to fully exploit the proposed module, we design a novel progressive training strategy that enables all parameters of the network to be fully trained. Extensive experiments on three standard benchmark datasets, i.e., PACS, VLCS and Office-Home, highlight the superiority of our method over state-of-the-art methods.
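
The core idea is feature-level augmentation: instead of transforming images, the style of intermediate feature maps is perturbed with random noise. The sketch below is a minimal, assumption-based illustration of such a module, not the authors' exact formulation. Following the common AdaIN/MixStyle reading, style is taken to be the channel-wise instance mean and standard deviation, and new random styles are produced by mixing Gaussian noise into those statistics; the class name, the noise_scale and p hyper-parameters, and the exact mixing rule are illustrative assumptions.

    import torch
    import torch.nn as nn

    class FeatureStyleRandomization(nn.Module):
        """Feature-level style randomization (illustrative sketch, not the paper's exact module).

        Style is modelled as the channel-wise instance statistics (mean, std) of a
        feature map; random noise is mixed into these statistics to synthesize new
        styles while the normalized content is preserved.
        """

        def __init__(self, noise_scale=0.5, p=0.5, eps=1e-6):
            super().__init__()
            self.noise_scale = noise_scale  # strength of the injected noise (assumed hyper-parameter)
            self.p = p                      # probability of applying the augmentation (assumed)
            self.eps = eps                  # numerical stability for the std

        def forward(self, x):
            # x: (B, C, H, W) intermediate feature map; perturb during training only.
            if not self.training or torch.rand(1).item() > self.p:
                return x
            mu = x.mean(dim=(2, 3), keepdim=True)               # original style: per-channel mean
            sigma = x.std(dim=(2, 3), keepdim=True) + self.eps  # original style: per-channel std
            content = (x - mu) / sigma                          # strip the original style
            # Integrate random noise into the original style to obtain a random style.
            new_mu = mu + self.noise_scale * torch.randn_like(mu) * mu.abs()
            new_sigma = sigma * (1.0 + self.noise_scale * torch.randn_like(sigma)).clamp(min=0.1)
            return content * new_sigma + new_mu                 # re-apply the randomized style

In use, such a module would typically be inserted after an early convolutional block during training, so that the downstream classifier learns features that are invariant to the randomized styles.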

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2022-08, Vol. 32 (8), p. 5495-5509
Main authors: Wang, Yue; Qi, Lei; Shi, Yinghuan; Gao, Yang
Format: Article
Language: English
Subjects: Adaptation models; Data augmentation; Data models; Domain generalization; Domains; Feature extraction; Modules; Random noise; Randomization; style randomization; Task analysis; Training; Training data
Publisher: New York: IEEE
ISSN: 1051-8215
EISSN: 1558-2205
CODEN: ITCTEM
DOI: 10.1109/TCSVT.2022.3152615
Source: IEEE Electronic Library (IEL)
Online access: Order full text