Maximal Domain Independent Representations Improve Transfer Learning
Saved in:
Main Authors: | Li, Adrian Shuai; Bertino, Elisa; Dang, Xuan-Hong; Singla, Ankush; Tu, Yuhai; Wegman, Mark N |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
Online Access: | Order full text |
creator | Li, Adrian Shuai; Bertino, Elisa; Dang, Xuan-Hong; Singla, Ankush; Tu, Yuhai; Wegman, Mark N |
description | The most effective domain adaptation (DA) involves decomposing the data
representation into a domain-independent representation (DIRep) and a domain-dependent
representation (DDRep). A classifier is trained using the DIRep of the labeled
source images. Since the DIRep is domain invariant, the classifier can be
"transferred" to make predictions for the target domain with no (or few) labels.
However, in current DA algorithms such as Domain Separation Networks (DSN),
information useful for classification in the target domain can "hide" in the
DDRep. DSN's weak constraint enforcing orthogonality of the DIRep and DDRep
allows this hiding and can result in poor performance. To address this
shortcoming, we developed a new algorithm that imposes a stronger constraint,
minimizing the DDRep with a KL-divergence loss in order to create the maximal
DIRep that enhances transfer-learning performance. Using synthetic datasets, we
show explicitly that, depending on initialization, DSN with its weaker
constraint can converge to sub-optimal solutions with poorer DA performance,
whereas our algorithm with the maximal DIRep is robust against such
perturbations. We demonstrate the equal-or-better performance of our approach
against state-of-the-art algorithms on several standard benchmark image
datasets, including Office. We further highlight the compatibility of our
algorithm with pretrained models, extending its applicability and versatility
in real-world scenarios. |
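The abstract's key mechanism — penalizing the DDRep with a KL-divergence loss so that shared information is forced into the DIRep — can be illustrated with the closed-form KL term used when a representation is parameterized as a diagonal Gaussian. This is a minimal sketch, assuming a VAE-style Gaussian DDRep with parameters `mu` and `log_var`; the paper's exact parameterization and architecture may differ.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).

    Minimizing this term over the DDRep's parameters shrinks the
    DDRep toward an uninformative standard-normal prior, so any
    information needed for classification or reconstruction must
    migrate into the DIRep (the "maximal DIRep" idea).
    """
    mu = np.asarray(mu, dtype=float)
    log_var = np.asarray(log_var, dtype=float)
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# A DDRep matching the prior (carrying no information) has zero penalty,
# while a DDRep whose mean encodes information is penalized.
print(kl_to_standard_normal(np.zeros(4), np.zeros(4)))  # 0.0
print(kl_to_standard_normal(np.ones(4), np.zeros(4)))   # 2.0
```

In training, this penalty would be added to the classification and reconstruction losses, trading off how much the DDRep is allowed to encode against how domain-independent the DIRep becomes.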
doi_str_mv | 10.48550/arxiv.2306.00262 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2306.00262 |
language | eng |
recordid | cdi_arxiv_primary_2306_00262 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
title | Maximal Domain Independent Representations Improve Transfer Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-21T01%3A53%3A59IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Maximal%20Domain%20Independent%20Representations%20Improve%20Transfer%20Learning&rft.au=Li,%20Adrian%20Shuai&rft.date=2023-05-31&rft_id=info:doi/10.48550/arxiv.2306.00262&rft_dat=%3Carxiv_GOX%3E2306_00262%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |