Reappraising Domain Generalization in Neural Networks

Given that neural networks generalize unreasonably well in the IID setting (exhibiting benign overfitting and improving in performance with more parameters), OOD generalization presents a consistent failure case that can sharpen our understanding of how they learn. This paper focuses on Domain Generalization (DG), which is perceived as the front face of OOD generalization. We find that the presence of multiple domains incentivizes domain-agnostic learning and is the primary reason for generalization in Traditional DG. We show that state-of-the-art results can be obtained by borrowing ideas from IID generalization, and that DG-tailored methods fail to add any performance gains. Furthermore, we explore beyond the Traditional DG (TDG) formulation and propose a novel ClassWise DG (CWDG) benchmark, where for each class we randomly select one of the domains and keep it aside for testing. Despite the model being exposed to all domains during training, CWDG evaluation is more challenging than TDG evaluation. We propose a novel iterative domain feature masking approach that achieves state-of-the-art results on the CWDG benchmark. Overall, while explaining these observations, our work furthers insights into the learning mechanisms of neural networks.
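The abstract pins down how the two evaluation protocols partition the data, so a short sketch can make the contrast concrete. The snippet below is an illustrative reconstruction, not the authors' released code: the helper functions, sample dictionaries, PACS-style domain and class names, and the seed are hypothetical placeholders.

```python
# Minimal sketch of the two splits described in the abstract.
# Traditional DG (TDG): hold out one entire domain for testing.
# ClassWise DG (CWDG): for each class, hold out one randomly chosen domain.
import random
from collections import defaultdict

def tdg_split(samples, held_out_domain):
    """Traditional DG: one whole domain is unseen during training."""
    train = [s for s in samples if s["domain"] != held_out_domain]
    test = [s for s in samples if s["domain"] == held_out_domain]
    return train, test

def cwdg_split(samples, seed=0):
    """ClassWise DG: every domain appears in training, but for each class one
    (class, domain) slice is held out for testing."""
    rng = random.Random(seed)
    domains_per_class = defaultdict(set)
    for s in samples:
        domains_per_class[s["label"]].add(s["domain"])
    held_out = {label: rng.choice(sorted(doms))
                for label, doms in domains_per_class.items()}
    train = [s for s in samples if s["domain"] != held_out[s["label"]]]
    test = [s for s in samples if s["domain"] == held_out[s["label"]]]
    return train, test, held_out

# Toy usage with made-up domains and classes.
samples = [{"domain": d, "label": c, "x": None}
           for d in ["photo", "art", "cartoon", "sketch"]
           for c in ["dog", "elephant", "giraffe"]
           for _ in range(5)]
tdg_train, tdg_test = tdg_split(samples, held_out_domain="sketch")
cwdg_train, cwdg_test, held_out = cwdg_split(samples, seed=0)
print(len(tdg_train), len(tdg_test), len(cwdg_train), len(cwdg_test), held_out)
```

The abstract only names the iterative domain feature masking approach without describing it, so the loop below is just one generic reading (repeatedly suppress the feature dimensions a domain classifier relies on most), not the paper's algorithm; the feature matrix, domain labels, and hyper-parameters are hypothetical.

```python
# Generic iterative feature-masking loop: fit a linear domain classifier,
# zero out the most domain-predictive dimensions, and repeat.
import numpy as np
from sklearn.linear_model import LogisticRegression

def iterative_domain_feature_mask(features, domain_labels, n_rounds=3, frac_per_round=0.1):
    """Return a boolean mask over feature dimensions with the most
    domain-predictive ones removed after `n_rounds` of refitting."""
    mask = np.ones(features.shape[1], dtype=bool)
    for _ in range(n_rounds):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(features[:, mask], domain_labels)
        importance = np.abs(clf.coef_).mean(axis=0)      # one score per kept dimension
        k = max(1, int(frac_per_round * importance.size))
        drop_local = np.argsort(importance)[-k:]         # most domain-predictive dims
        kept_idx = np.flatnonzero(mask)
        mask[kept_idx[drop_local]] = False               # remove them from the mask
    return mask
```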

Bibliographic details
Published in: arXiv.org, 2022-04
Main authors: Sivaprasad, Sarath; Goindani, Akshay; Garg, Vaibhav; Basu, Ritam; Kosgi, Saiteja; Gandhi, Vineet
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org
Source: Free E-Journals
Subjects: Ablation; Algorithms; Domains; Machine learning; Neural networks; Optimization; Training
Rights: Published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/)
Online access: Full text