Few-Shot Adaptation of Pre-Trained Networks for Domain Shift
| Main authors: | , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Summary: Deep networks are prone to performance degradation when there is a domain shift between the source (training) data and target (test) data. Recent test-time adaptation methods update batch normalization layers of pre-trained source models deployed in new target environments with streaming data to mitigate such performance degradation. Although such methods can adapt on-the-fly without first collecting a large target domain dataset, their performance is dependent on streaming conditions such as mini-batch size and class distribution, which can be unpredictable in practice. In this work, we propose a framework for few-shot domain adaptation to address the practical challenges of data-efficient adaptation. Specifically, we propose a constrained optimization of feature normalization statistics in pre-trained source models, supervised by a small support set from the target domain. Our method is easy to implement and improves source model performance with as few as one sample per class for classification tasks. Extensive experiments on 5 cross-domain classification and 4 semantic segmentation datasets show that our method achieves more accurate and reliable performance than test-time adaptation, while not being constrained by streaming conditions.
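The record contains no code, but the minimal PyTorch sketch below illustrates the general idea the abstract describes: adapting only the normalization layers of a source-pretrained network using a small labeled support set from the target domain. This is a hypothetical illustration, not the authors' method; the paper proposes a constrained optimization of feature normalization statistics, whereas this sketch simply fine-tunes the BatchNorm affine parameters (gamma, beta) by gradient descent while batch statistics are re-estimated on the support set. The function names `adapt_few_shot` and `collect_bn_params` are invented for this example.

```python
# Hedged sketch: few-shot adaptation of a pre-trained network by updating
# only its batch-normalization layers with a small labeled target support set.
# This is NOT the paper's exact constrained optimization; it is a simplified
# stand-in that fine-tunes BN affine parameters by gradient descent.
import torch
import torch.nn as nn
import torchvision.models as models

def collect_bn_params(model: nn.Module):
    """Gather the affine parameters (gamma, beta) of every BatchNorm2d layer."""
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            params += [m.weight, m.bias]
    return params

def adapt_few_shot(model, support_x, support_y, steps=50, lr=1e-3):
    """Adapt BN layers using a small labeled support set from the target domain."""
    # train() mode makes BN compute statistics from the support batch,
    # loosely mirroring the re-estimation of normalization statistics.
    model.train()
    # Freeze the whole network, then re-enable only the BN affine parameters.
    for p in model.parameters():
        p.requires_grad_(False)
    bn_params = collect_bn_params(model)
    for p in bn_params:
        p.requires_grad_(True)
    opt = torch.optim.SGD(bn_params, lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(support_x), support_y)
        loss.backward()
        opt.step()
    model.eval()
    return model

# Toy usage: adapt a source-pretrained ResNet-18 with one sample per class
# (here random tensors stand in for real target-domain images).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
support_x = torch.randn(10, 3, 224, 224)  # 10 target-domain images
support_y = torch.arange(10)              # one label per class (toy labels)
adapt_few_shot(model, support_x, support_y)
```

Because only the normalization parameters are trainable, the number of updated weights is tiny relative to the full network, which is why such approaches can work with as little as one sample per class without severe overfitting.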
DOI: 10.48550/arxiv.2205.15234