Class-aware cross-domain target detection based on cityscape in fog
Unsupervised sim-to-real adaptation (USRA) for semantic segmentation aims to make models trained on simulated data perform well in real-world environments. In practical applications such as robotic vision and autonomous driving, this can save the cost of manually annotating data. Conventional USRA methods typically assume that large amounts of unlabeled real-world data are available for training. In practice, however, this assumption often does not hold, because such data are difficult to collect and remain scarce in some settings. We therefore aim to reduce the reliance on large amounts of real data in unsupervised sim-to-real domain adaptation (USDA) and domain generalization (USDG), where only limited real-world data exist. To compensate for the limited real data, we first construct a pseudo-target domain by transferring the style of the available real data onto the simulated data. Building on this, we propose a class-aware cross-domain randomization method that extracts domain-invariant knowledge from the simulated and pseudo-target images. We demonstrate the effectiveness of our approach on USDA and USDG benchmarks such as Cityscapes and Foggy Cityscapes, where it substantially outperforms existing methods.
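The abstract describes building a pseudo-target domain by transferring the style of real images onto simulated ones, but gives no implementation details. One common mechanism for this kind of feature-level style transfer is AdaIN-style renormalization, sketched below as a purely illustrative assumption (the function name and the use of AdaIN are not from the paper):

```python
import numpy as np

def adain_style_transfer(content_feat, style_feat, eps=1e-5):
    """Re-normalize simulated-image features (content) to match the
    channel-wise mean/std of real-image features (style), one common way
    to synthesize a pseudo-target domain from limited real data.

    content_feat, style_feat: arrays of shape (C, H, W).
    """
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    # Whiten the content statistics, then re-color with the style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

After this operation the simulated features carry the real domain's first- and second-order channel statistics while keeping the simulated content layout, which is the essence of pseudo-target construction.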
Saved in:

Published in: | Machine vision and applications 2023-11, Vol.34 (6), p.114, Article 114 |
---|---|
Main authors: | Gan, Linfeng; Liu, Hu; Chen, Aoran; Xu, Xibin; Zhang, Xuebiao |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | 6 |
container_start_page | 114 |
container_title | Machine vision and applications |
container_volume | 34 |
creator | Gan, Linfeng; Liu, Hu; Chen, Aoran; Xu, Xibin; Zhang, Xuebiao |
description | Unsupervised sim-to-real adaptation (USRA) for semantic segmentation aims to make models trained on simulated data perform well in real-world environments. In practical applications such as robotic vision and autonomous driving, this can save the cost of manually annotating data. Conventional USRA methods typically assume that large amounts of unlabeled real-world data are available for training. In practice, however, this assumption often does not hold, because such data are difficult to collect and remain scarce in some settings. We therefore aim to reduce the reliance on large amounts of real data in unsupervised sim-to-real domain adaptation (USDA) and domain generalization (USDG), where only limited real-world data exist. To compensate for the limited real data, we first construct a pseudo-target domain by transferring the style of the available real data onto the simulated data. Building on this, we propose a class-aware cross-domain randomization method that extracts domain-invariant knowledge from the simulated and pseudo-target images. We demonstrate the effectiveness of our approach on USDA and USDG benchmarks such as Cityscapes and Foggy Cityscapes, where it substantially outperforms existing methods. |
doi_str_mv | 10.1007/s00138-023-01463-6 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0932-8092 |
ispartof | Machine vision and applications, 2023-11, Vol.34 (6), p.114, Article 114 |
issn | 0932-8092; 1432-1769 |
language | eng |
recordid | cdi_proquest_journals_2870573799 |
source | SpringerLink Journals - AutoHoldings |
subjects | Communications Engineering; Computer Science; Fog; Image Processing and Computer Vision; Machine vision; Networks; Original Paper; Pattern Recognition; Semantic segmentation; Simulation; Target detection; Training; Vision systems |
title | Class-aware cross-domain target detection based on cityscape in fog |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-17T00%3A27%3A44IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Class-aware%20cross-domain%20target%20detection%20based%20on%20cityscape%20in%20fog&rft.jtitle=Machine%20vision%20and%20applications&rft.au=Gan,%20Linfeng&rft.date=2023-11-01&rft.volume=34&rft.issue=6&rft.spage=114&rft.pages=114-&rft.artnum=114&rft.issn=0932-8092&rft.eissn=1432-1769&rft_id=info:doi/10.1007/s00138-023-01463-6&rft_dat=%3Cproquest_cross%3E2870573799%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2870573799&rft_id=info:pmid/&rfr_iscdi=true |