Quantifying Context Bias in Domain Adaptation for Object Detection
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Domain adaptation for object detection (DAOD) aims to transfer a trained model from a source to a target domain. Various DAOD methods exist, some of which minimize context bias between foreground-background associations across domains. However, no prior work has studied context bias in DAOD by analyzing changes in background features during adaptation and how context bias is represented in different domains. Our experiments highlight the potential usability of context bias in DAOD. We address the problem by varying activation values over different layers of trained models and by masking the background, both of which impact the number and quality of detections. We then use one synthetic dataset from CARLA and two different versions of real open-source data, Cityscapes and Cityscapes foggy, as separate domains to represent and quantify context bias. We use metrics such as Maximum Mean Discrepancy (MMD) and Maximum Variance Discrepancy (MVD) to find layer-specific conditional probability estimates of foreground given manipulated background regions for separate domains. We demonstrate through detailed analysis that an understanding of context bias can affect DAOD approach and foc…
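As a rough illustration of one of the metrics named in the abstract, the sketch below computes a biased estimate of squared Maximum Mean Discrepancy (MMD) with an RBF kernel between two sets of feature vectors, for example background-region features taken from a detector layer before and after adaptation. This is a generic MMD estimator, not the paper's implementation; the array names, the kernel bandwidth `gamma`, and the use of random placeholder features are assumptions made purely for illustration.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise squared Euclidean distances between rows of a and b,
    # passed through a Gaussian (RBF) kernel.
    sq_dists = (
        np.sum(a ** 2, axis=1)[:, None]
        + np.sum(b ** 2, axis=1)[None, :]
        - 2.0 * a @ b.T
    )
    return np.exp(-gamma * np.clip(sq_dists, 0.0, None))

def mmd_squared(x, y, gamma=1.0):
    """Biased estimate of squared MMD between samples x and y (rows = feature vectors)."""
    k_xx = rbf_kernel(x, x, gamma)
    k_yy = rbf_kernel(y, y, gamma)
    k_xy = rbf_kernel(x, y, gamma)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

# Hypothetical usage: compare pooled background features from one layer of a
# source-domain detector against the same layer's features on the target domain.
source_bg_feats = np.random.randn(256, 128)        # placeholder features
target_bg_feats = np.random.randn(256, 128) + 0.5  # placeholder shifted features
print(f"MMD^2 estimate: {mmd_squared(source_bg_feats, target_bg_feats):.4f}")
```

A larger estimate indicates a bigger discrepancy between the two feature distributions; in this setting it can serve as a layer-wise signal of how strongly background (context) statistics differ between domains.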
DOI: 10.48550/arxiv.2409.14679