Divergence-Agnostic Unsupervised Domain Adaptation by Adversarial Attacks
Conventional machine learning algorithms suffer from the problem that a model trained on existing data fails to generalize well to data sampled from other distributions. To tackle this issue, unsupervised domain adaptation (UDA) transfers the knowledge learned from a well-labeled source domain to a...
Saved in:
Published in: | IEEE transactions on pattern analysis and machine intelligence, 2022-11, Vol.44 (11), p.8196-8211 |
---|---|
Main Authors: | Li, Jingjing; Du, Zhekai; Zhu, Lei; Ding, Zhengming; Lu, Ke; Shen, Heng Tao |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
container_end_page | 8211 |
---|---|
container_issue | 11 |
container_start_page | 8196 |
container_title | IEEE transactions on pattern analysis and machine intelligence |
container_volume | 44 |
creator | Li, Jingjing; Du, Zhekai; Zhu, Lei; Ding, Zhengming; Lu, Ke; Shen, Heng Tao |
description | Conventional machine learning algorithms suffer from the problem that a model trained on existing data fails to generalize well to data sampled from other distributions. To tackle this issue, unsupervised domain adaptation (UDA) transfers the knowledge learned from a well-labeled source domain to a different but related target domain where labeled data is unavailable. The majority of existing UDA methods assume that data from the source domain and the target domain are available and complete during training. Thus, the divergence between the two domains can be formulated and minimized. In this paper, we consider a more practical yet challenging UDA setting where either the source domain data or the target domain data are unknown. Conventional UDA methods would fail in this setting since the domain divergence is agnostic due to the absence of the source data or the target data. Technically, we investigate UDA from a novel view, adversarial attack, and tackle the divergence-agnostic adaptive learning problem in a unified framework. Specifically, we first report the motivation of our approach by investigating the inherent relationship between UDA and adversarial attacks. Then we carefully design adversarial examples to attack the training model and harness these adversarial examples. We argue that the generalization ability of the model would be significantly improved if it can defend against our attack, thereby improving performance on the target domain. Theoretically, we analyze the generalization bound for our method based on domain adaptation theories. Extensive experimental results on multiple UDA benchmarks under conventional, source-absent and target-absent UDA settings verify that our method achieves favorable performance compared with previous ones. Notably, this work extends the scope of both domain adaptation and adversarial attack, and is expected to inspire more ideas in the community. (An illustrative sketch of the attack-and-defend idea appears after the record fields below.) |
doi_str_mv | 10.1109/TPAMI.2021.3109287 |
format | Article |
publisher | IEEE, New York |
pmid | 34478362 |
coden | ITPIDJ |
rights | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
orcidid | 0000-0002-3456-4993; 0000-0002-5504-2529; 0000-0002-2993-7142; 0000-0002-2999-2088; 0000-0002-9406-3920 |
identifier | ISSN: 0162-8828 |
ispartof | IEEE transactions on pattern analysis and machine intelligence, 2022-11, Vol.44 (11), p.8196-8211 |
issn | 0162-8828 1939-3539 2160-9292 |
language | eng |
recordid | cdi_ieee_primary_9528987 |
source | IEEE Electronic Library (IEL) |
subjects | Adaptation; Adaptation models; adversarial attacks; Algorithms; Data models; domain generalization; Domains; Feature extraction; Knowledge management; Machine learning; Measurement; model adaptation; Neural networks; Performance enhancement; Semantics; Training; transfer learning; Unsupervised domain adaptation |
title | Divergence-Agnostic Unsupervised Domain Adaptation by Adversarial Attacks |
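The abstract describes adapting a model by generating adversarial examples against it and then training the model to withstand that attack, which in turn improves target-domain performance. The record does not spell out the authors' construction, so the snippet below is only a minimal sketch under stated assumptions: a PyTorch classifier, a one-step FGSM-style perturbation driven by the model's own pseudo-labels (no target labels exist in UDA), and a KL consistency loss between clean and perturbed target inputs. The names `fgsm_perturb`, `defense_step`, `eps`, and `weight` are illustrative, not taken from the paper.

```python
# Minimal, hypothetical sketch of "attack then defend" adaptation on unlabeled
# target data. It is NOT the authors' published algorithm; it only illustrates
# the idea with a standard FGSM-style perturbation and a consistency loss.
import torch
import torch.nn.functional as F


def fgsm_perturb(model, x, eps=0.03):
    """One-step adversarial example built from the model's own pseudo-labels."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)
    pseudo_labels = logits.argmax(dim=1)         # no ground-truth target labels in UDA
    loss = F.cross_entropy(logits, pseudo_labels)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()  # move inputs in the loss-increasing direction


def defense_step(model, optimizer, x_target, eps=0.03, weight=1.0):
    """One adaptation step: penalize disagreement between clean and attacked views."""
    model.train()
    x_adv = fgsm_perturb(model, x_target, eps)
    p_clean = F.softmax(model(x_target), dim=1)
    log_p_adv = F.log_softmax(model(x_adv), dim=1)
    loss = weight * F.kl_div(log_p_adv, p_clean.detach(), reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice such defense steps would be interleaved with whatever supervision remains (source labels when the target is absent, or a frozen source-trained model when the source is absent), which is the spirit of the source-absent and target-absent settings the abstract mentions. For the theoretical side, the abstract only says the generalization bound is derived from standard domain adaptation theory; the classical bound such analyses typically build on (not the paper's own result) relates target error to source error, a domain divergence term, and the error of an ideal joint hypothesis:

$$\epsilon_T(h) \le \epsilon_S(h) + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda$$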