Domain Stylization: A Fast Covariance Matching Framework Towards Domain Adaptation
Generating computer graphics (CG) rendered synthetic images has been widely used to create simulation environments for robotics/autonomous driving and generate labeled data. Yet, the problem of training models purely with synthetic data remains challenging due to the considerable domain gaps caused by current limitations on rendering.
Saved in:
Published in: | IEEE transactions on pattern analysis and machine intelligence 2021-07, Vol.43 (7), p.2360-2372 |
---|---|
Main Authors: | Dundar, Aysegul; Liu, Ming-Yu; Yu, Zhiding; Wang, Ting-Chun; Zedlewski, John; Kautz, Jan |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
container_end_page | 2372 |
---|---|
container_issue | 7 |
container_start_page | 2360 |
container_title | IEEE transactions on pattern analysis and machine intelligence |
container_volume | 43 |
creator | Dundar, Aysegul; Liu, Ming-Yu; Yu, Zhiding; Wang, Ting-Chun; Zedlewski, John; Kautz, Jan |
description | Generating computer graphics (CG) rendered synthetic images has been widely used to create simulation environments for robotics/autonomous driving and generate labeled data. Yet, the problem of training models purely with synthetic data remains challenging due to the considerable domain gaps caused by current limitations on rendering. In this paper, we propose a simple yet effective domain adaptation framework towards closing such gap at image level. Unlike many GAN-based approaches, our method aims to match the covariance of the universal feature embeddings across domains, making the adaptation a fast, convenient step and avoiding the need for potentially difficult GAN training. To align domains more precisely, we further propose a conditional covariance matching framework which iteratively estimates semantic segmentation regions and conditionally matches the class-wise feature covariance given the segmentation regions. We demonstrate that both tasks can mutually refine and considerably improve each other, leading to state-of-the-art domain adaptation results. Extensive experiments under multiple synthetic-to-real settings show that our approach exceeds the performance of latest domain adaptation approaches. In addition, we offer a quantitative analysis where our framework shows considerable reduction in Frechet Inception distance between source and target domains, demonstrating the effectiveness of this work in bridging the synthetic-to-real domain gap. |
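The covariance-matching step the abstract describes can be illustrated as a whitening-coloring transform: whiten the source feature embeddings with their own covariance, then re-color them with the target covariance. The sketch below is a hedged illustration under that reading; the function name `match_covariance`, the `eps` regularizer, and the flat `(N, C)` feature layout are assumptions for this example, not the authors' released code.

```python
import numpy as np

def match_covariance(source_feats, target_feats, eps=1e-5):
    """Align source features to the target covariance (whiten, then re-color).

    Illustrative sketch only; names are hypothetical, not from the paper.
    source_feats, target_feats: (N, C) arrays of feature embeddings.
    """
    mu_s = source_feats.mean(axis=0)
    mu_t = target_feats.mean(axis=0)
    xs = source_feats - mu_s
    xt = target_feats - mu_t

    # Sample covariance matrices (C x C), regularized for stability.
    cov_s = xs.T @ xs / (len(xs) - 1) + eps * np.eye(xs.shape[1])
    cov_t = xt.T @ xt / (len(xt) - 1) + eps * np.eye(xt.shape[1])

    # Whitening transform built from the source covariance ...
    es, Us = np.linalg.eigh(cov_s)
    whiten = Us @ np.diag(es ** -0.5) @ Us.T
    # ... followed by a coloring transform from the target covariance.
    et, Ut = np.linalg.eigh(cov_t)
    color = Ut @ np.diag(et ** 0.5) @ Ut.T

    # Whitened-then-colored features, re-centered on the target mean.
    return xs @ whiten @ color + mu_t
```

After this transform the source features carry (approximately) the target mean and covariance, which is the sense in which the adaptation is a fast, closed-form step rather than a GAN training run; the paper's conditional variant applies this per semantic-segmentation class rather than globally.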
doi_str_mv | 10.1109/TPAMI.2020.2969421 |
format | Article |
pmid | 31995476 |
eissn | 1939-3539 2160-9292 |
coden | ITPIDJ |
publisher | United States: IEEE |
ieee_id | 8968319 |
orcid | 0000-0002-8830-429X; 0000-0003-2014-6325; 0000-0002-2951-2398 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0162-8828 |
ispartof | IEEE transactions on pattern analysis and machine intelligence, 2021-07, Vol.43 (7), p.2360-2372 |
issn | 0162-8828; 1939-3539; 2160-9292 |
language | eng |
recordid | cdi_crossref_primary_10_1109_TPAMI_2020_2969421 |
source | IEEE Electronic Library (IEL) |
subjects | Adaptation; Adaptation models; Computer graphics; Covariance; Data models; Domain adaptation; Domains; Gallium nitride; Image segmentation; image stylization; Matching; object detection; Robotics; semantic segmentation; Semantics; Task analysis; Training |
title | Domain Stylization: A Fast Covariance Matching Framework Towards Domain Adaptation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T04%3A50%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Domain%20Stylization:%20A%20Fast%20Covariance%20Matching%20Framework%20Towards%20Domain%20Adaptation&rft.jtitle=IEEE%20transactions%20on%20pattern%20analysis%20and%20machine%20intelligence&rft.au=Dundar,%20Aysegul&rft.date=2021-07-01&rft.volume=43&rft.issue=7&rft.spage=2360&rft.epage=2372&rft.pages=2360-2372&rft.issn=0162-8828&rft.eissn=1939-3539&rft.coden=ITPIDJ&rft_id=info:doi/10.1109/TPAMI.2020.2969421&rft_dat=%3Cproquest_RIE%3E2348801077%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2539351618&rft_id=info:pmid/31995476&rft_ieee_id=8968319&rfr_iscdi=true |