iFAN: Image-Instance Full Alignment Networks for Adaptive Object Detection
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Training an object detector on a data-rich domain and applying it to a
data-poor one with a limited performance drop is highly attractive in industry,
because it saves huge annotation costs. Recent research on unsupervised domain
adaptive object detection has verified that aligning data distributions between
source and target images through adversarial learning is very useful. The key
questions are when, where, and how to apply it to achieve the best results. We
propose Image-Instance Full Alignment Networks (iFAN) to tackle this problem by
precisely aligning feature distributions on both the image and instance levels:
1) Image-level alignment: multi-scale features are roughly aligned by training
adversarial domain classifiers in a hierarchically-nested fashion. 2) Full
instance-level alignment: deep semantic information and elaborate instance
representations are fully exploited to establish a strong relationship among
categories and domains. Establishing these correlations is formulated as a
metric learning problem by carefully constructing instance pairs. These
adaptations can be integrated into an object detector (e.g. Faster R-CNN),
resulting in an end-to-end trainable framework in which the multiple alignments
work collaboratively in a coarse-to-fine manner. On two domain adaptation
tasks, synthetic-to-real (SIM10K -> Cityscapes) and normal-to-foggy weather
(Cityscapes -> Foggy Cityscapes), iFAN outperforms state-of-the-art methods
with a boost of 10%+ AP over the source-only baseline.
DOI: 10.48550/arxiv.2003.04132
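
The two alignment components described in the abstract can be illustrated with a rough PyTorch-style sketch. This is not the authors' released implementation: the names (GradReverse, ImageDomainClassifier, instance_pair_loss), the classifier architecture, and the contrastive-loss form are assumptions about how adversarial image-level alignment and metric learning over cross-domain instance pairs are commonly realized.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated (scaled)
    gradient in the backward pass, so the backbone is trained adversarially
    against the domain classifier."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class ImageDomainClassifier(nn.Module):
    """Per-location domain classifier for one feature-pyramid level
    (illustrative of image-level alignment; one such head per scale)."""

    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),
        )

    def forward(self, feat, domain_label, lambd=1.0):
        # Reverse gradients so the detector backbone learns features the
        # domain classifier cannot separate (source vs. target).
        logits = self.net(grad_reverse(feat, lambd))
        target = torch.full_like(logits, float(domain_label))
        return F.binary_cross_entropy_with_logits(logits, target)


def instance_pair_loss(feats_a, feats_b, same_category, margin=1.0):
    """Contrastive loss over constructed cross-domain instance pairs:
    pull same-category pairs together, push different-category pairs
    at least `margin` apart (a standard metric-learning formulation)."""
    dist = F.pairwise_distance(feats_a, feats_b)
    same = same_category.float()
    pos = same * dist.pow(2)
    neg = (1.0 - same) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()
```

In a setup of this kind, the image-level loss would be computed at each feature scale for both source and target images, the pair loss over ROI features of sampled cross-domain instance pairs, and both terms added to the detector's standard classification and regression losses for end-to-end training.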