Refign: Align and Refine for Adaptation of Semantic Segmentation to Adverse Conditions
Format: Article
Language: English
Abstract: Due to the scarcity of dense pixel-level semantic annotations for images
recorded in adverse visual conditions, there has been a keen interest in
unsupervised domain adaptation (UDA) for the semantic segmentation of such
images. UDA adapts models trained on normal conditions to the target
adverse-condition domains. Meanwhile, multiple datasets with driving scenes
provide corresponding images of the same scenes across multiple conditions,
which can serve as a form of weak supervision for domain adaptation. We propose
Refign, a generic extension to self-training-based UDA methods which leverages
these cross-domain correspondences. Refign consists of two steps: (1) aligning
the normal-condition image to the corresponding adverse-condition image using
an uncertainty-aware dense matching network, and (2) refining the adverse
prediction with the normal prediction using an adaptive label correction
mechanism. We design custom modules to streamline both steps and set the new
state of the art for domain-adaptive semantic segmentation on several
adverse-condition benchmarks, including ACDC and Dark Zurich. Refign introduces
no extra training parameters and only minimal computational overhead, incurred
solely during training, and can be used as a drop-in extension to improve any
given self-training-based UDA method. Code is available at
https://github.com/brdav/refign.
DOI: 10.48550/arxiv.2207.06825
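
As an illustration of the two-step mechanism described in the abstract, the following is a minimal PyTorch sketch. It is a hypothetical reconstruction under assumed interfaces, not the paper's implementation: the function refine_pseudo_label, the flow and certainty inputs (standing in for the output of an uncertainty-aware dense matching network), and the linear confidence-weighted blending are all assumptions made for illustration; the actual modules are in the repository linked above.

```python
import torch
import torch.nn.functional as F


def refine_pseudo_label(adverse_probs, normal_probs, flow, certainty, trust=0.5):
    """Hypothetical sketch of Refign's align-and-refine idea.

    adverse_probs: (B, C, H, W) softmax scores on the adverse-condition image.
    normal_probs:  (B, C, H, W) softmax scores on the corresponding normal image.
    flow:          (B, 2, H, W) assumed dense correspondences mapping adverse-image
                   pixels to normal-image coordinates (x/y pixel offsets).
    certainty:     (B, 1, H, W) assumed per-pixel matching confidence in [0, 1].
    trust:         assumed scalar weight on the warped normal prediction.
    """
    B, C, H, W = adverse_probs.shape

    # Build an absolute sampling grid from the flow, then normalize it to the
    # [-1, 1] coordinate convention expected by F.grid_sample.
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32, device=flow.device),
        torch.arange(W, dtype=torch.float32, device=flow.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow  # (B, 2, H, W)
    grid[:, 0] = 2.0 * grid[:, 0] / (W - 1) - 1.0  # x to [-1, 1]
    grid[:, 1] = 2.0 * grid[:, 1] / (H - 1) - 1.0  # y to [-1, 1]
    grid = grid.permute(0, 2, 3, 1)  # (B, H, W, 2), last dim = (x, y)

    # Step 1 (align): warp the normal-condition prediction onto the adverse image.
    warped_normal = F.grid_sample(normal_probs, grid, align_corners=True)

    # Step 2 (refine): blend the two predictions, trusting the warped normal
    # prediction only where the matching is confident, and take the argmax as
    # the corrected pseudo-label.
    w = trust * certainty
    refined = (1.0 - w) * adverse_probs + w * warped_normal
    return refined.argmax(dim=1)  # (B, H, W) hard pseudo-label
```

The resulting pseudo-label would then supervise the segmentation loss on the adverse-condition image inside whatever self-training UDA loop Refign extends; since the matching and blending run only when pseudo-labels are generated, the overhead is confined to training, consistent with the claim in the abstract.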