Dynamic Instance Domain Adaptation

Most existing studies on unsupervised domain adaptation (UDA) assume that each domain's training samples come with domain labels (e.g., painting, photo). Samples from each domain are assumed to follow the same distribution, and the domain labels are exploited to learn domain-invariant features via feature alignment. However, such an assumption often does not hold true: there often exist numerous finer-grained domains (e.g., dozens of modern painting styles have been developed, each differing dramatically from the classic styles). Forcing feature distribution alignment across each artificially defined, coarse-grained domain can therefore be ineffective. In this paper, we address both single-source and multi-source UDA from a completely different perspective, which is to view each instance as a fine domain. Feature alignment across domains is thus redundant. Instead, we propose to perform dynamic instance domain adaptation (DIDA). Concretely, a dynamic neural network with adaptive convolutional kernels is developed to generate instance-adaptive residuals that adapt domain-agnostic deep features to each individual instance. This enables a shared classifier to be applied to both source- and target-domain data without relying on any domain annotation. Further, instead of imposing intricate feature alignment losses, we adopt a simple semi-supervised learning paradigm that uses only a cross-entropy loss for both labeled source data and pseudo-labeled target data. Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets, including Digits, Office-Home, DomainNet, Digit-Five, and PACS.
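The mechanism the abstract describes, adaptive convolutional kernels that produce an instance-adaptive residual on top of domain-agnostic features, can be illustrated with a minimal PyTorch sketch. This is not the authors' published DIDA-Net implementation: the module name `DynamicInstanceAdapter`, the depthwise 3x3 kernel, and the `reduction` bottleneck are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicInstanceAdapter(nn.Module):
    """Sketch of an instance-adaptive residual block (hypothetical names).

    A small hypernetwork predicts one depthwise 3x3 kernel per instance
    and per channel; the resulting residual is added back onto the
    domain-agnostic features, so downstream layers stay shared.
    """

    def __init__(self, channels: int, reduction: int = 4, k: int = 3):
        super().__init__()
        self.channels, self.k = channels, k
        self.kernel_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                             # global context per instance
            nn.Flatten(),
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels * k * k),  # per-channel depthwise kernels
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Predict per-instance depthwise kernels: (B*C, 1, k, k).
        kernels = self.kernel_head(x).view(b * c, 1, self.k, self.k)
        # Fold the batch into the channel axis so a grouped convolution
        # applies a different predicted kernel to every instance in one call.
        residual = F.conv2d(x.reshape(1, b * c, h, w), kernels,
                            padding=self.k // 2, groups=b * c)
        return x + residual.view(b, c, h, w)  # instance-adaptive residual
```

For example, `DynamicInstanceAdapter(256)(torch.randn(8, 256, 14, 14))` returns an adapted feature map of the same shape. Folding the batch into the channel axis of a grouped convolution is a common trick for per-instance dynamic kernels; whether the paper uses this exact formulation is not stated in the abstract.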

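The training recipe the abstract outlines (plain cross-entropy on labeled source data and on pseudo-labeled target data, with no alignment losses) might look like the sketch below. The confidence threshold for filtering pseudo-labels is an assumption; the abstract does not specify how pseudo-labels are selected.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_src, y_src, x_tgt, conf_thresh: float = 0.95):
    """Hedged sketch: source cross-entropy + pseudo-labeled target cross-entropy.

    `conf_thresh` is an assumed confidence filter; the paper's exact
    pseudo-labeling rule is not given in the abstract.
    """
    # Supervised cross-entropy on labeled source data.
    loss_src = F.cross_entropy(model(x_src), y_src)

    # Pseudo-labels for target data come from the shared classifier itself.
    with torch.no_grad():
        probs = F.softmax(model(x_tgt), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = (conf >= conf_thresh).float()  # mask out low-confidence predictions

    # Cross-entropy on the confident pseudo-labeled target samples only.
    per_sample = F.cross_entropy(model(x_tgt), pseudo, reduction="none")
    loss_tgt = (per_sample * keep).sum() / keep.sum().clamp(min=1.0)

    return loss_src + loss_tgt
```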
Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2022, Vol. 31, p. 4585-4597
Main authors: Deng, Zhongying; Zhou, Kaiyang; Li, Da; He, Junjun; Song, Yi-Zhe; Xiang, Tao
Format: Article
Language: English
Subjects: Adaptation; Adaptation models; Alignment; Annotations; Convolutional neural networks; Domains; Dynamic instance domain adaptation; Feature extraction; Kernel; Labels; Multi-source domain adaptation; Neural networks; Picture archiving and communication systems; Single-source domain adaptation; Unsupervised domain adaptation
Online access: Order full text
DOI: 10.1109/TIP.2022.3186531
ISSN: 1057-7149
EISSN: 1941-0042
PMID: 35776810