Active Dynamic Weighting for multi-domain adaptation

Multi-source unsupervised domain adaptation aims to transfer knowledge from multiple labeled source domains to an unlabeled target domain. Existing methods either seek a mixture of distributions across various domains or combine multiple single-source models for weighted fusion in the decision process, with little insight into the distributional discrepancy between different source domains and the target domain.

Bibliographic Details
Published in: Neural networks, 2024-09, Vol. 177, p. 106398, Article 106398
Main authors: Liu, Long; Zhou, Bo; Zhao, Zhipeng; Liu, Zening
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page
container_issue
container_start_page 106398
container_title Neural networks
container_volume 177
creator Liu, Long
Zhou, Bo
Zhao, Zhipeng
Liu, Zening
description Multi-source unsupervised domain adaptation aims to transfer knowledge from multiple labeled source domains to an unlabeled target domain. Existing methods either seek a mixture of distributions across various domains or combine multiple single-source models for weighted fusion in the decision process, with little insight into the distributional discrepancy between different source domains and the target domain. Considering the discrepancies in global and local feature distributions between domains, and the difficulty of obtaining category boundaries across domains, this paper proposes a novel Active Dynamic Weighting (ADW) method for multi-source domain adaptation. Specifically, to effectively exploit locally advantageous features in the source domains, ADW designs a multi-source dynamic adjustment mechanism that, within each training batch, dynamically controls the degree of feature alignment between each source domain and the target domain. In addition, to keep cross-domain categories distinguishable, ADW devises a dynamic boundary loss that guides the model to focus on hard samples near the decision boundary, which sharpens the decision boundary and improves the model's classification ability. ADW also applies active learning to multi-source unsupervised domain adaptation for the first time: guided by the dynamic boundary loss, it uses an efficient importance-sampling strategy to select hard target-domain samples for annotation within a minimal annotation budget, integrates them into the training process, and further refines domain alignment at the category level. Experiments on various benchmark datasets consistently demonstrate the superiority of our method.
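The description above names three mechanisms (per-source dynamic weighting of feature alignment, a dynamic boundary loss on hard samples, and importance sampling of target samples for annotation) without giving formulas. The PyTorch-style sketch below only illustrates those ideas under stated assumptions: the functions gaussian_mmd, dynamic_source_weights, boundary_loss, and select_hard_samples, the single-kernel MMD discrepancy, the inverse-discrepancy weighting rule, and the margin value 0.3 are all hypothetical choices made here, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def gaussian_mmd(x, y, sigma=1.0):
    # Squared MMD with a single Gaussian kernel: a simple proxy for the
    # source-target feature discrepancy (the kernel choice is an assumption).
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()


def dynamic_source_weights(source_feats, target_feats):
    # One possible "dynamic adjustment" rule: weight each source domain in the
    # current batch inversely to its estimated discrepancy from the target.
    gaps = torch.stack([gaussian_mmd(f, target_feats) for f in source_feats])
    inv = 1.0 / (gaps + 1e-6)
    return inv / inv.sum()  # normalized weights, larger for closer sources


def boundary_loss(target_logits, margin=0.3):
    # Hinge-style penalty on samples whose top-2 class-probability gap is
    # small, i.e. samples near the decision boundary ("hard samples").
    probs = F.softmax(target_logits, dim=1)
    top2 = probs.topk(2, dim=1).values
    gap = top2[:, 0] - top2[:, 1]
    return F.relu(margin - gap).mean()


def select_hard_samples(target_logits, budget):
    # Stand-in for the importance-sampling step: pick the target samples with
    # the smallest top-2 margin as candidates for the annotation budget.
    probs = F.softmax(target_logits, dim=1)
    top2 = probs.topk(2, dim=1).values
    gap = top2[:, 0] - top2[:, 1]
    return torch.topk(-gap, k=min(budget, gap.numel())).indices


# How the pieces could combine inside one training batch (f_s1, f_s2, f_t are
# feature batches, logits_t the classifier outputs on the target batch):
# w = dynamic_source_weights([f_s1, f_s2], f_t)
# align = sum(w[i] * gaussian_mmd(f, f_t) for i, f in enumerate([f_s1, f_s2]))
# loss = cls_loss + align + boundary_loss(logits_t)
# to_label = select_hard_samples(logits_t, budget=16)
```

In an actual training loop the weights would be recomputed for every batch, which is what the abstract refers to as dynamically controlling the degree of alignment for each source domain; the selected indices would be the samples sent for annotation under the budget.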
doi_str_mv 10.1016/j.neunet.2024.106398
format Article
eissn 1879-2782
pmid 38805796
publisher United States: Elsevier Ltd
rights Copyright © 2024 Elsevier Ltd. All rights reserved.
orcidid https://orcid.org/0009-0008-9721-123X
fulltext fulltext
identifier ISSN: 0893-6080
ispartof Neural networks, 2024-09, Vol.177, p.106398, Article 106398
issn 0893-6080
1879-2782
1879-2782
language eng
recordid cdi_proquest_miscellaneous_3061781399
source MEDLINE; Elsevier ScienceDirect Journals Complete
subjects Active learning
Algorithms
Distribution alignment
Domain adaptation
Humans
Neural Networks, Computer
Transfer learning
Unsupervised Machine Learning
title Active Dynamic Weighting for multi-domain adaptation
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T17%3A09%3A37IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Active%20Dynamic%20Weighting%20for%20multi-domain%20adaptation&rft.jtitle=Neural%20networks&rft.au=Liu,%20Long&rft.date=2024-09&rft.volume=177&rft.spage=106398&rft.pages=106398-&rft.artnum=106398&rft.issn=0893-6080&rft.eissn=1879-2782&rft_id=info:doi/10.1016/j.neunet.2024.106398&rft_dat=%3Cproquest_cross%3E3061781399%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3061781399&rft_id=info:pmid/38805796&rft_els_id=S0893608024003228&rfr_iscdi=true