Domain Adversarial Reinforcement Learning for Partial Domain Adaptation
Partial domain adaptation aims to transfer knowledge from a label-rich source domain to a label-scarce target domain (i.e., the target categories are a subset of the source ones), which relaxes the common assumption in traditional domain adaptation that the label space is fully shared across different domains.
Saved in:
Published in: | IEEE Transactions on Neural Networks and Learning Systems 2022-02, Vol.33 (2), p.539-553 |
---|---|
Main authors: | Chen, Jin; Wu, Xinxiao; Duan, Lixin; Gao, Shenghua |
Format: | Article |
Language: | eng |
Subjects: | Adaptation; Adversarial learning; Benchmark testing; Computer science; Domains; Feature extraction; Games; Generators; Knowledge management; Learning; Learning systems; Machine learning; partial domain adaptation; Reinforcement; Reinforcement learning |
Online access: | Order full text |
container_end_page | 553 |
---|---|
container_issue | 2 |
container_start_page | 539 |
container_title | IEEE Transactions on Neural Networks and Learning Systems |
container_volume | 33 |
creator | Chen, Jin; Wu, Xinxiao; Duan, Lixin; Gao, Shenghua |
description | Partial domain adaptation aims to transfer knowledge from a label-rich source domain to a label-scarce target domain (i.e., the target categories are a subset of the source ones), which relaxes the common assumption in traditional domain adaptation that the label space is fully shared across different domains. In this more general and practical scenario of partial domain adaptation, a major challenge is how to select source instances from the shared categories to ensure positive transfer for the target domain. To address this problem, we propose a domain adversarial reinforcement learning (DARL) framework to progressively select source instances to learn transferable features between domains by reducing the domain shift. Specifically, we employ deep Q-learning to learn policies for an agent to make selection decisions by approximating the action-value function. Moreover, domain adversarial learning is introduced to learn a common feature subspace for the selected source instances and the target instances, and also to contribute to the reward calculation for the agent based on the relevance of the selected source instances with respect to the target domain. Extensive experiments on several benchmark data sets clearly demonstrate the superior performance of our proposed DARL over existing state-of-the-art methods for partial domain adaptation. (A schematic sketch of this selection-and-reward loop appears after the record below.) |
doi_str_mv | 10.1109/TNNLS.2020.3028078 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2162-237X |
ispartof | IEEE Transactions on Neural Networks and Learning Systems, 2022-02, Vol.33 (2), p.539-553 |
issn | 2162-237X 2162-2388 |
language | eng |
recordid | cdi_ieee_primary_9228896 |
source | IEEE Electronic Library (IEL) |
subjects | Adaptation; Adversarial learning; Benchmark testing; Computer science; Domains; Feature extraction; Games; Generators; Knowledge management; Learning; Learning systems; Machine learning; partial domain adaptation; Reinforcement; Reinforcement learning |
title | Domain Adversarial Reinforcement Learning for Partial Domain Adaptation |
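The description field above outlines two interacting components: a deep Q-network whose action values decide which source instances to select, and a domain discriminator that both drives the learning of a shared feature subspace and supplies the agent's reward via the relevance of the selected instances to the target domain. Below is a minimal PyTorch sketch of one such interaction step, assuming pre-extracted features; `SelectorDQN`, `DomainDiscriminator`, the layer sizes, and the greedy selection rule are all hypothetical stand-ins, not the paper's actual architecture or released implementation.

```python
# Hypothetical sketch of the selection-and-reward loop described in the
# abstract. All names, architectures, and hyperparameters here are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM = 256  # assumed dimensionality of the extracted features


class SelectorDQN(nn.Module):
    """Approximates the action-value function Q(s, a) for the binary
    action {skip, select} on a single candidate source instance."""

    def __init__(self, feat_dim: int = FEAT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),  # Q-values for [skip, select]
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)


class DomainDiscriminator(nn.Module):
    """Predicts P(domain = target | feature). Its output doubles as a
    relevance score: selected source instances it mistakes for target
    instances are presumed to come from the shared categories."""

    def __init__(self, feat_dim: int = FEAT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(feats))


# --- one illustrative interaction step --------------------------------
dqn, disc = SelectorDQN(), DomainDiscriminator()
disc_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)

src_feats = torch.randn(32, FEAT_DIM)  # stand-in source-domain features
tgt_feats = torch.randn(32, FEAT_DIM)  # stand-in target-domain features

# Agent: greedy selection over the candidate source instances.
actions = dqn(src_feats).argmax(dim=1)  # 1 = select, 0 = skip
selected = src_feats[actions == 1]
if selected.size(0) == 0:  # degenerate case: fall back to all candidates
    selected = src_feats

# Adversarial step: the discriminator separates selected-source from
# target features; training the (omitted) feature extractor to fool it
# would drive both domains toward a common feature subspace.
d_src, d_tgt = disc(selected), disc(tgt_feats)
d_loss = F.binary_cross_entropy(d_src, torch.zeros_like(d_src)) + \
         F.binary_cross_entropy(d_tgt, torch.ones_like(d_tgt))
disc_opt.zero_grad()
d_loss.backward()
disc_opt.step()

# Reward: relevance of the selected instances w.r.t. the target domain,
# i.e. how target-like the discriminator now finds them. This scalar
# would feed a standard temporal-difference DQN update (omitted here).
with torch.no_grad():
    reward = disc(selected).mean().item()
print(f"selected {selected.size(0)}/32 instances, mean reward {reward:.3f}")
```

Tying the reward to the discriminator's output is what couples the two components: source instances that the adversarial game cannot distinguish from target instances earn the agent a higher reward, so the selection policy is progressively steered toward the shared categories, which is the stated goal of avoiding negative transfer in partial domain adaptation.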