AWADA: Attention-Weighted Adversarial Domain Adaptation for Object Detection
Object detection networks have reached an impressive performance level, yet a lack of suitable data in specific applications often limits them in practice. Typically, additional data sources are utilized to support the training task. However, domain gaps between these data sources pose a challenge in deep learning. GAN-based image-to-image style-transfer is commonly applied to shrink the domain gap, but it is unstable and decoupled from the object detection task. We propose AWADA, an Attention-Weighted Adversarial Domain Adaptation framework that creates a feedback loop between the style-transformation and detection tasks. By constructing foreground object attention maps from object detector proposals, we focus the transformation on foreground object regions and stabilize style-transfer training. In extensive experiments and ablation studies, we show that AWADA reaches state-of-the-art unsupervised domain adaptation object detection performance on commonly used benchmarks for tasks such as synthetic-to-real, adverse weather, and cross-camera adaptation.
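The abstract outlines the core mechanism: detector proposals are rasterized into a foreground attention map that then weights the adversarial style-transfer loss. The sketch below illustrates that idea only; it is not the authors' implementation, and the function names, the (x1, y1, x2, y2) box format, the least-squares GAN loss, and the 0.1 background floor are all assumptions made for illustration.

```python
# Minimal sketch of the attention-weighting idea described in the abstract,
# NOT the AWADA reference implementation. Box format, score floor, and loss
# form are illustrative assumptions.
import torch

def build_attention_map(proposals, scores, height, width):
    """Rasterize detector proposals into a foreground attention map.

    proposals: (N, 4) tensor of boxes as (x1, y1, x2, y2) in pixels (assumed format).
    scores:    (N,) tensor of objectness scores in [0, 1].
    Returns a (height, width) map in [0, 1]; background stays near 0.
    """
    attn = torch.zeros(height, width)
    for (x1, y1, x2, y2), s in zip(proposals.round().long(), scores):
        x1, y1 = max(int(x1), 0), max(int(y1), 0)
        x2, y2 = min(int(x2), width), min(int(y2), height)
        # Keep the strongest objectness score where proposals overlap.
        attn[y1:y2, x1:x2] = torch.maximum(
            attn[y1:y2, x1:x2],
            torch.full((max(y2 - y1, 0), max(x2 - x1, 0)), float(s)),
        )
    return attn

def attention_weighted_adv_loss(disc_logits, attn, real):
    """Least-squares GAN loss with per-pixel attention weights.

    disc_logits: (B, 1, H, W) patch-discriminator output.
    attn:        (B, 1, H, W) attention map aligned with the logits.
    The 0.1 floor (an illustrative choice) keeps some gradient on background.
    """
    target = torch.ones_like(disc_logits) if real else torch.zeros_like(disc_logits)
    weight = attn.clamp(min=0.1)
    return (weight * (disc_logits - target) ** 2).sum() / weight.sum()

# Toy usage: two proposals on a 64x64 image, random patch logits.
boxes = torch.tensor([[8.0, 8.0, 32.0, 32.0], [20.0, 30.0, 60.0, 56.0]])
scores = torch.tensor([0.9, 0.6])
attn = build_attention_map(boxes, scores, 64, 64)[None, None]  # (1, 1, 64, 64)
logits = torch.randn(1, 1, 64, 64)
print(attention_weighted_adv_loss(logits, attn, real=False))
```

Taking the per-pixel maximum over overlapping proposals keeps the strongest objectness evidence at each location, so high-confidence foreground regions dominate the transformation loss, which is consistent with the abstract's stated goal of focusing the style transfer on foreground objects.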
Published in: | arXiv.org 2022-08 |
---|---|
Main Authors: | Menke, Maximilian; Wenzel, Thomas; Schwung, Andreas |
Format: | Article |
Language: | English |
Subjects: | Ablation; Adaptation; Data sources; Domains; Feedback loops; Object recognition; Training; Transformations (mathematics) |
Online Access: | Full text |
container_title | arXiv.org |
---|---|
creator | Menke, Maximilian; Wenzel, Thomas; Schwung, Andreas |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2708877282 |
source | Free E-Journals |
subjects | Ablation; Adaptation; Data sources; Domains; Feedback loops; Object recognition; Training; Transformations (mathematics) |
title | AWADA: Attention-Weighted Adversarial Domain Adaptation for Object Detection |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T01%3A24%3A59IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=AWADA:%20Attention-Weighted%20Adversarial%20Domain%20Adaptation%20for%20Object%20Detection&rft.jtitle=arXiv.org&rft.au=Menke,%20Maximilian&rft.date=2022-08-31&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2708877282%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2708877282&rft_id=info:pmid/&rfr_iscdi=true |