Multidomain Adaptation With Sample and Source Distillation
Unsupervised multidomain adaptation attracts increasing attention as it delivers richer information when tackling a target task from an unlabeled target domain by leveraging the knowledge attained from labeled source domains. However, it is the quality of training samples, not just the quantity, that influences transfer performance. In this article, we propose a multidomain adaptation method with sample and source distillation (SSD), which develops a two-step selective strategy to distill source samples and define the importance of source domains. To distill samples, the pseudo-labeled target domain is constructed to learn a series of category classifiers to identify transfer and inefficient source samples. To rank domains, the agreements of accepting a target sample as the insider of source domains are estimated by constructing a domain discriminator based on selected transfer source samples. Using the selected samples and ranked domains, transfer from source domains to the target domain is achieved by adapting multilevel distributions in a latent feature space. Furthermore, to explore more usable target information which is expected to enhance the performance across domains of source predictors, an enhancement mechanism is built by matching selected pseudo-labeled and unlabeled target samples. The degrees of acceptance learned by the domain discriminator are finally employed as source merging weights to predict the target task. Superiority of the proposed SSD is validated on real-world visual classification tasks.
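The abstract describes a pipeline of per-source category classifiers, a domain discriminator trained on distilled source samples, and target predictions merged using the discriminator's acceptance degrees as source weights. The snippet below is a minimal illustrative sketch of that final merging step only, not the authors' implementation: the synthetic data, the `make_domain` helper, and the use of logistic regression for both the category classifiers and the discriminator are assumptions, and the sample distillation, multilevel distribution alignment, and target-enhancement steps are omitted.

```python
# Minimal sketch (not the authors' code) of acceptance-weighted source merging:
# each source's classifier votes on target samples, weighted by how strongly a
# domain discriminator accepts each target sample as an "insider" of that source.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for two labeled source domains and an unlabeled target.
def make_domain(shift, n=200):
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

(Xs1, ys1), (Xs2, ys2) = make_domain(0.0), make_domain(1.0)
Xt = rng.normal(0.3, 1.0, size=(100, 2))            # unlabeled target samples
sources = [(Xs1, ys1), (Xs2, ys2)]

# 1) One category classifier per (distilled) source domain.
src_clfs = [LogisticRegression().fit(Xs, ys) for Xs, ys in sources]

# 2) Domain discriminator over the selected source samples; its per-source
#    probabilities on target samples act as "acceptance" degrees.
Xd = np.vstack([Xs for Xs, _ in sources])
yd = np.concatenate([np.full(len(Xs), k) for k, (Xs, _) in enumerate(sources)])
discriminator = LogisticRegression().fit(Xd, yd)
acceptance = discriminator.predict_proba(Xt)         # shape: (n_target, n_sources)

# 3) Merge source predictions with the acceptance degrees as weights.
probs = np.stack([clf.predict_proba(Xt) for clf in src_clfs], axis=1)
target_pred = np.argmax((acceptance[..., None] * probs).sum(axis=1), axis=1)
print(target_pred[:10])
```

Averaging the acceptance degrees over all target samples instead of weighting per sample would reduce this to a fixed ranking of source domains, which is the coarser of the two selection levels the abstract mentions.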
Published in: | IEEE Transactions on Cybernetics, 2024-04, Vol. 54 (4), p. 2193-2205 |
---|---|
Main authors: | Li, Keqiuyin; Lu, Jie; Zuo, Hua; Zhang, Guangquan |
Format: | Article |
Language: | English |
Subjects: | Adaptation; Adaptation models; Classification; Classification algorithms; Cybernetics; Discriminators; Distillation; domain adaptation; Feature extraction; Loss measurement; Machine learning; Training; Transfer learning; Visual tasks |
Online access: | Order full text |
DOI: | 10.1109/TCYB.2023.3236008 |
PMID: | 37022277 |
Publisher: | IEEE, United States |
ISSN: | 2168-2267 |
EISSN: | 2168-2275 |
Source: | IEEE Electronic Library (IEL) |