Learning Target Domain Specific Classifier for Partial Domain Adaptation
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain. Previous UDA methods assume that the source and target domains share an identical label space, which is unrealistic in practice since the label information of the target domain is agnostic.
Saved in:
Published in: | arXiv.org 2020-08 |
---|---|
Main Authors: | Chuan-Xian Ren, Pengfei Ge, Peiyi Yang, Shuicheng Yan |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full Text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Chuan-Xian Ren; Ge, Pengfei; Yang, Peiyi; Shuicheng Yan |
description | Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain. Previous UDA methods assume that the source and target domains share an identical label space, which is unrealistic in practice since the label information of the target domain is agnostic. This paper focuses on a more realistic UDA scenario, i.e. partial domain adaptation (PDA), where the target label space is subsumed in the source label space. In the PDA scenario, the source outliers that are absent in the target domain may be wrongly matched to the target domain (technically named negative transfer), leading to performance degradation of UDA methods. This paper proposes a novel Target Domain Specific Classifier Learning-based Domain Adaptation (TSCDA) method. TSCDA presents a soft-weighed maximum mean discrepancy criterion to partially align feature distributions and alleviate negative transfer. Also, it learns a target-specific classifier for the target domain with pseudo-labels and multiple auxiliary classifiers, to further address classifier shift. A module named Peers Assisted Learning is used to minimize the prediction difference between multiple target-specific classifiers, which makes the classifiers more discriminant for the target domain. Extensive experiments conducted on three PDA benchmark datasets show that TSCDA outperforms other state-of-the-art methods by a large margin, e.g. \(4\%\) and \(5.6\%\) on average on Office-31 and Office-Home, respectively. |
doi_str_mv | 10.48550/arxiv.2008.10785 |
format | Article |
fullrecord | <record><control><sourceid>proquest_arxiv</sourceid><recordid>TN_cdi_arxiv_primary_2008_10785</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2437281331</sourcerecordid><originalsourceid>FETCH-LOGICAL-a521-f16eca18f22ed82dddd532292c81b46cb2c5f636c0a61dc1c1925fd88c5a26dd3</originalsourceid><addsrcrecordid>eNo1kE1Lw0AQhhdBsNT-AE8ueE7cnc1utscSPyoUFMw9TPejbEmTuJuK_ntjq3N55_AwvM8QcsNZXmgp2T3Gr_CZA2M656zU8oLMQAie6QLgiixS2jPGQJUgpZiR9cZh7EK3ozXGnRvpQ3_A0NH3wZngg6FViylNm4vU95G-YRwDtv_YyuIw4hj67ppcemyTW_zlnNRPj3W1zjavzy_VapOhBJ55rpxBrj2AsxrsNFIALMFovi2U2YKRXgllGCpuDTd8CdJbrY1EUNaKObk9nz1pNkMMB4zfza9uc9KdiLszMcT-4-jS2Oz7Y-ymTg0UogTNp2-IHw5TV-M</addsrcrecordid><sourcetype>Open Access Repository</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2437281331</pqid></control><display><type>article</type><title>Learning Target Domain Specific Classifier for Partial Domain Adaptation</title><source>Freely Accessible Journals</source><source>arXiv.org</source><creator>Chuan-Xian Ren ; Ge, Pengfei ; Yang, Peiyi ; Shuicheng Yan</creator><creatorcontrib>Chuan-Xian Ren ; Ge, Pengfei ; Yang, Peiyi ; Shuicheng Yan</creatorcontrib><description>Unsupervised domain adaptation~(UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain. Previous UDA methods assume that the source and target domains share an identical label space, which is unrealistic in practice since the label information of the target domain is agnostic. This paper focuses on a more realistic UDA scenario, i.e. partial domain adaptation (PDA), where the target label space is subsumed to the source label space. In the PDA scenario, the source outliers that are absent in the target domain may be wrongly matched to the target domain (technically named negative transfer), leading to performance degradation of UDA methods. 
This paper proposes a novel Target Domain Specific Classifier Learning-based Domain Adaptation (TSCDA) method. TSCDA presents a soft-weighed maximum mean discrepancy criterion to partially align feature distributions and alleviate negative transfer. Also, it learns a target-specific classifier for the target domain with pseudo-labels and multiple auxiliary classifiers, to further address classifier shift. A module named Peers Assisted Learning is used to minimize the prediction difference between multiple target-specific classifiers, which makes the classifiers more discriminant for the target domain. Extensive experiments conducted on three PDA benchmark datasets show that TSCDA outperforms other state-of-the-art methods with a large margin, e.g. \(4\%\) and \(5.6\%\) averagely on Office-31 and Office-Home, respectively.</description><identifier>EISSN: 2331-8422</identifier><identifier>DOI: 10.48550/arxiv.2008.10785</identifier><language>eng</language><publisher>Ithaca: Cornell University Library, arXiv.org</publisher><subject>Adaptation ; Classifiers ; Computer Science - Computer Vision and Pattern Recognition ; Learning ; Outliers (statistics) ; Performance degradation</subject><ispartof>arXiv.org, 2020-08</ispartof><rights>2020. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). 
Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.</rights><rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0</rights><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><link.rule.ids>228,230,782,786,887,27934</link.rule.ids><backlink>$$Uhttps://doi.org/10.48550/arXiv.2008.10785$$DView paper in arXiv$$Hfree_for_read</backlink><backlink>$$Uhttps://doi.org/10.1109/TNNLS.2020.2995648$$DView published paper (Access to full text may be restricted)$$Hfree_for_read</backlink></links><search><creatorcontrib>Chuan-Xian Ren</creatorcontrib><creatorcontrib>Ge, Pengfei</creatorcontrib><creatorcontrib>Yang, Peiyi</creatorcontrib><creatorcontrib>Shuicheng Yan</creatorcontrib><title>Learning Target Domain Specific Classifier for Partial Domain Adaptation</title><title>arXiv.org</title><description>Unsupervised domain adaptation~(UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain. Previous UDA methods assume that the source and target domains share an identical label space, which is unrealistic in practice since the label information of the target domain is agnostic. This paper focuses on a more realistic UDA scenario, i.e. partial domain adaptation (PDA), where the target label space is subsumed to the source label space. In the PDA scenario, the source outliers that are absent in the target domain may be wrongly matched to the target domain (technically named negative transfer), leading to performance degradation of UDA methods. This paper proposes a novel Target Domain Specific Classifier Learning-based Domain Adaptation (TSCDA) method. 
TSCDA presents a soft-weighed maximum mean discrepancy criterion to partially align feature distributions and alleviate negative transfer. Also, it learns a target-specific classifier for the target domain with pseudo-labels and multiple auxiliary classifiers, to further address classifier shift. A module named Peers Assisted Learning is used to minimize the prediction difference between multiple target-specific classifiers, which makes the classifiers more discriminant for the target domain. Extensive experiments conducted on three PDA benchmark datasets show that TSCDA outperforms other state-of-the-art methods with a large margin, e.g. \(4\%\) and \(5.6\%\) averagely on Office-31 and Office-Home, respectively.</description><subject>Adaptation</subject><subject>Classifiers</subject><subject>Computer Science - Computer Vision and Pattern Recognition</subject><subject>Learning</subject><subject>Outliers (statistics)</subject><subject>Performance degradation</subject><issn>2331-8422</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2020</creationdate><recordtype>article</recordtype><sourceid>ABUWG</sourceid><sourceid>AFKRA</sourceid><sourceid>AZQEC</sourceid><sourceid>BENPR</sourceid><sourceid>CCPQU</sourceid><sourceid>DWQXO</sourceid><sourceid>GOX</sourceid><recordid>eNo1kE1Lw0AQhhdBsNT-AE8ueE7cnc1utscSPyoUFMw9TPejbEmTuJuK_ntjq3N55_AwvM8QcsNZXmgp2T3Gr_CZA2M656zU8oLMQAie6QLgiixS2jPGQJUgpZiR9cZh7EK3ozXGnRvpQ3_A0NH3wZngg6FViylNm4vU95G-YRwDtv_YyuIw4hj67ppcemyTW_zlnNRPj3W1zjavzy_VapOhBJ55rpxBrj2AsxrsNFIALMFovi2U2YKRXgllGCpuDTd8CdJbrY1EUNaKObk9nz1pNkMMB4zfza9uc9KdiLszMcT-4-jS2Oz7Y-ymTg0UogTNp2-IHw5TV-M</recordid><startdate>20200825</startdate><enddate>20200825</enddate><creator>Chuan-Xian Ren</creator><creator>Ge, Pengfei</creator><creator>Yang, Peiyi</creator><creator>Shuicheng Yan</creator><general>Cornell University Library, 
arXiv.org</general><scope>8FE</scope><scope>8FG</scope><scope>ABJCF</scope><scope>ABUWG</scope><scope>AFKRA</scope><scope>AZQEC</scope><scope>BENPR</scope><scope>BGLVJ</scope><scope>CCPQU</scope><scope>DWQXO</scope><scope>HCIFZ</scope><scope>L6V</scope><scope>M7S</scope><scope>PIMPY</scope><scope>PQEST</scope><scope>PQQKQ</scope><scope>PQUKI</scope><scope>PRINS</scope><scope>PTHSS</scope><scope>AKY</scope><scope>GOX</scope></search><sort><creationdate>20200825</creationdate><title>Learning Target Domain Specific Classifier for Partial Domain Adaptation</title><author>Chuan-Xian Ren ; Ge, Pengfei ; Yang, Peiyi ; Shuicheng Yan</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-a521-f16eca18f22ed82dddd532292c81b46cb2c5f636c0a61dc1c1925fd88c5a26dd3</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2020</creationdate><topic>Adaptation</topic><topic>Classifiers</topic><topic>Computer Science - Computer Vision and Pattern Recognition</topic><topic>Learning</topic><topic>Outliers (statistics)</topic><topic>Performance degradation</topic><toplevel>online_resources</toplevel><creatorcontrib>Chuan-Xian Ren</creatorcontrib><creatorcontrib>Ge, Pengfei</creatorcontrib><creatorcontrib>Yang, Peiyi</creatorcontrib><creatorcontrib>Shuicheng Yan</creatorcontrib><collection>ProQuest SciTech Collection</collection><collection>ProQuest Technology Collection</collection><collection>Materials Science & Engineering Collection</collection><collection>ProQuest Central (Alumni Edition)</collection><collection>ProQuest Central UK/Ireland</collection><collection>ProQuest Central Essentials</collection><collection>ProQuest Central</collection><collection>Technology Collection</collection><collection>ProQuest One Community College</collection><collection>ProQuest Central Korea</collection><collection>SciTech Premium Collection</collection><collection>ProQuest Engineering 
Collection</collection><collection>Engineering Database</collection><collection>Access via ProQuest (Open Access)</collection><collection>ProQuest One Academic Eastern Edition (DO NOT USE)</collection><collection>ProQuest One Academic</collection><collection>ProQuest One Academic UKI Edition</collection><collection>ProQuest Central China</collection><collection>Engineering Collection</collection><collection>arXiv Computer Science</collection><collection>arXiv.org</collection><jtitle>arXiv.org</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Chuan-Xian Ren</au><au>Ge, Pengfei</au><au>Yang, Peiyi</au><au>Shuicheng Yan</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Learning Target Domain Specific Classifier for Partial Domain Adaptation</atitle><jtitle>arXiv.org</jtitle><date>2020-08-25</date><risdate>2020</risdate><eissn>2331-8422</eissn><abstract>Unsupervised domain adaptation~(UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain. Previous UDA methods assume that the source and target domains share an identical label space, which is unrealistic in practice since the label information of the target domain is agnostic. This paper focuses on a more realistic UDA scenario, i.e. partial domain adaptation (PDA), where the target label space is subsumed to the source label space. In the PDA scenario, the source outliers that are absent in the target domain may be wrongly matched to the target domain (technically named negative transfer), leading to performance degradation of UDA methods. This paper proposes a novel Target Domain Specific Classifier Learning-based Domain Adaptation (TSCDA) method. TSCDA presents a soft-weighed maximum mean discrepancy criterion to partially align feature distributions and alleviate negative transfer. 
Also, it learns a target-specific classifier for the target domain with pseudo-labels and multiple auxiliary classifiers, to further address classifier shift. A module named Peers Assisted Learning is used to minimize the prediction difference between multiple target-specific classifiers, which makes the classifiers more discriminant for the target domain. Extensive experiments conducted on three PDA benchmark datasets show that TSCDA outperforms other state-of-the-art methods with a large margin, e.g. \(4\%\) and \(5.6\%\) averagely on Office-31 and Office-Home, respectively.</abstract><cop>Ithaca</cop><pub>Cornell University Library, arXiv.org</pub><doi>10.48550/arxiv.2008.10785</doi><oa>free_for_read</oa></addata></record> |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2020-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2008_10785 |
source | Freely Accessible Journals; arXiv.org |
subjects | Adaptation Classifiers Computer Science - Computer Vision and Pattern Recognition Learning Outliers (statistics) Performance degradation |
title | Learning Target Domain Specific Classifier for Partial Domain Adaptation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-02T01%3A32%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20Target%20Domain%20Specific%20Classifier%20for%20Partial%20Domain%20Adaptation&rft.jtitle=arXiv.org&rft.au=Chuan-Xian%20Ren&rft.date=2020-08-25&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2008.10785&rft_dat=%3Cproquest_arxiv%3E2437281331%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2437281331&rft_id=info:pmid/&rfr_iscdi=true |
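The abstract above names two concrete mechanisms: a soft-weighed maximum mean discrepancy (MMD) that down-weights source classes receiving little target prediction mass, and a Peers Assisted Learning (PAL) term that penalizes disagreement between target-specific classifiers. A minimal NumPy sketch of both ideas follows, assuming a biased RBF-kernel MMD estimator; the function names (`soft_weighted_mmd`, `pal_discrepancy`) and the exact weighting scheme are illustrative, not taken from the authors' code.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of a and b."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def soft_weighted_mmd(xs, ys, xt, target_probs, gamma=1.0):
    """Squared MMD where each source sample is re-weighted by the average
    target-prediction mass of its class, so source-only (outlier) classes
    contribute little to the alignment term."""
    class_w = target_probs.mean(axis=0)      # soft weight per source class
    w = class_w[ys]
    w = w / w.sum()                          # normalized source sample weights
    vt = np.full(len(xt), 1.0 / len(xt))     # uniform target weights
    k_ss = rbf_kernel(xs, xs, gamma)
    k_tt = rbf_kernel(xt, xt, gamma)
    k_st = rbf_kernel(xs, xt, gamma)
    # squared distance between weighted kernel mean embeddings
    return w @ k_ss @ w + vt @ k_tt @ vt - 2 * w @ k_st @ vt

def pal_discrepancy(p1, p2):
    """Peers-assisted term: mean absolute difference between two
    classifiers' predicted distributions on the same target batch."""
    return np.abs(p1 - p2).mean()
```

Because the weighted estimator is the squared RKHS distance between weighted mean embeddings, it is non-negative, and classes with near-zero target prediction mass barely influence it, which is the intended defense against negative transfer in the PDA setting.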