Towards More General Loss and Setting in Unsupervised Domain Adaptation

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2023-10, Vol. 35 (10), pp. 10140-10150
Authors: Shui, Changjian; Pu, Ruizhi; Xu, Gezheng; Wen, Jun; Zhou, Fan; Gagne, Christian; Ling, Charles X.; Wang, Boyu
Format: Article
Language: English
Abstract: In this article, we present an analysis of unsupervised domain adaptation with a series of theoretical and algorithmic results. We derive a novel Rényi-α divergence-based generalization bound, tailored to domain adaptation algorithms with arbitrary loss functions in a stochastic setting. Moreover, our theoretical results provide new insights into the assumptions required for successful domain adaptation: the closeness between the conditional distributions of the two domains and the Lipschitzness on the source domain. Under these assumptions, we show that if the conditional generation distributions of the two domains are close, the Lipschitzness property can be transferred from the source domain to the target domain without knowing the exact target distribution. Motivated by this analysis and these assumptions, we further derive practical principles for deep domain adaptation: 1) Rényi-2 adversarial training for matching the marginal distributions and 2) Lipschitz regularization for the classifier. Our experimental results on both synthetic and real-world datasets support our theoretical findings and the practical effectiveness of the proposed principles.
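For context, the Rényi divergence of order α named in the abstract is, in its standard textbook form,

D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \mathbb{E}_{x \sim Q}\!\left[ \left( \frac{P(x)}{Q(x)} \right)^{\alpha} \right], \qquad \alpha > 0,\ \alpha \neq 1,

and the order-2 case satisfies D_2(P \,\|\, Q) = \log\bigl(1 + \chi^2(P \,\|\, Q)\bigr), i.e., it is a monotone transform of the chi-squared divergence. (The paper's generalization bound built on this divergence is in the full text; only the standard definition is restated here.)

As an illustration of the two practical principles, the following is a minimal PyTorch-style sketch, not the authors' implementation: the toy architectures, the least-squares domain critic used as a stand-in Rényi-2-type estimator (via the χ² connection above), the finite-difference Lipschitz penalty, and the 0.1 trade-off weights are all hypothetical choices for exposition.

import torch
import torch.nn as nn

# Toy modules for illustration only; the paper's architectures are in the full text.
feat = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 16))  # feature extractor
clf = nn.Linear(16, 2)                                                # classifier
disc = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))  # domain critic

def lipschitz_penalty(model, x, eps=1e-2):
    # Finite-difference surrogate for a Lipschitz constraint: penalize how
    # fast the output moves under a small random input perturbation.
    delta = eps * torch.randn_like(x)
    ratio = (model(x + delta) - model(x)).norm(dim=1) / delta.norm(dim=1)
    return ratio.mean()

# One illustrative step on a toy batch: labeled source (xs, ys), unlabeled target xt.
xs, ys = torch.randn(32, 2), torch.randint(0, 2, (32,))
xt = torch.randn(32, 2)

opt = torch.optim.Adam(list(feat.parameters()) + list(clf.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

zs, zt = feat(xs), feat(xt)

# (1a) Critic step: a least-squares domain critic, used as a stand-in
# estimator for a Rényi-2-type divergence between the feature marginals
# (least-squares critics relate to the chi-squared divergence).
d_loss = ((disc(zs.detach()) - 1) ** 2).mean() + (disc(zt.detach()) ** 2).mean()
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# (1b + 2) Feature/classifier step: source task loss, marginal matching
# (make target features look like source features to the critic), and
# Lipschitz regularization of the classifier on source features.
task = nn.functional.cross_entropy(clf(zs), ys)
match = ((disc(zt) - 1) ** 2).mean()
lip = lipschitz_penalty(clf, zs.detach())
loss = task + 0.1 * match + 0.1 * lip  # 0.1 weights are arbitrary in this sketch
opt.zero_grad(); loss.backward(); opt.step()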
DOI: 10.1109/TKDE.2023.3266785
ISSN: 1041-4347
EISSN: 1558-2191
Source: IEEE Electronic Library (IEL)
Subjects: Adaptation; Algorithms; Computer science; Divergence; Domain adaptation; Labeling; Principles; Regularization; Representation learning; Rényi divergence; Supervised learning; Task analysis; Training; Upper bound; Urban areas