Foresee What You Will Learn: Data Augmentation for Domain Generalization in Non-stationary Environment
Saved in:
Main Authors:
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Abstract: Existing domain generalization methods aim to learn a model that performs
well even on unseen domains. In many real-world machine learning applications,
the data distribution shifts gradually along domain indices. For example,
a self-driving car with a vision system drives from dawn to dusk as the sky
darkens gradually, so the system must adapt to changes in ambient illumination
and continue to drive safely on the road. In this paper, we formulate such
problems as Evolving Domain Generalization, where a model aims to generalize
well on a target domain by discovering and leveraging the evolving pattern of
the environment. We then propose Directional Domain Augmentation (DDA), which
simulates unseen target features by mapping source data, as augmentations,
through a domain transformer. Specifically, we formulate DDA as a bi-level
optimization problem and solve it through a novel meta-learning approach in the
representation space. We evaluate the proposed method on both synthetic and
real-world datasets, and empirical results show that our approach outperforms
other existing methods.
DOI: 10.48550/arxiv.2301.07845
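
To illustrate the bi-level, meta-learning formulation described in the abstract, the sketch below shows one possible reading: a domain transformer simulates "future" features from the current source domain, a classifier takes an inner adaptation step on those simulated features, and the transformer is updated so the adapted classifier performs well on the actual next domain. This is a minimal toy sketch, not the authors' released implementation; the module names, toy data, and single-step inner update are all assumptions.

```python
# Hypothetical sketch of a directional-augmentation, bi-level meta-learning loop.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainTransformer(nn.Module):
    """Maps features of domain t toward simulated features of domain t+1."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, z):
        return z + self.net(z)  # residual shift along the assumed evolving direction

dim, n_classes = 16, 2
encoder = nn.Linear(8, dim)              # shared feature extractor (toy)
classifier = nn.Linear(dim, n_classes)   # task head
transformer = DomainTransformer(dim)

opt_outer = torch.optim.Adam(transformer.parameters(), lr=1e-3)
opt_task = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)

# Toy sequence of gradually shifting source domains (x_t, y_t), t = 0..T-1.
domains = [(torch.randn(32, 8) + 0.3 * t, torch.randint(0, n_classes, (32,)))
           for t in range(4)]

for (x_t, y_t), (x_next, y_next) in zip(domains[:-1], domains[1:]):
    z_t = encoder(x_t)
    z_aug = transformer(z_t)                      # simulated "future" features
    # Inner step: adapt the classifier on the augmented features (one MAML-style step).
    inner_loss = F.cross_entropy(classifier(z_aug), y_t)
    grads = torch.autograd.grad(inner_loss, list(classifier.parameters()), create_graph=True)
    fast_w = [w - 0.1 * g for w, g in zip(classifier.parameters(), grads)]
    # Outer step: the adapted classifier should perform well on the real next domain.
    z_next = encoder(x_next)
    logits_next = F.linear(z_next, fast_w[0], fast_w[1])
    outer_loss = F.cross_entropy(logits_next, y_next)
    opt_outer.zero_grad()
    opt_task.zero_grad()
    outer_loss.backward()
    opt_outer.step()
    opt_task.step()
```

In this reading, the outer loss evaluated on the true next domain is what forces the transformer to capture the direction of domain evolution, so at test time it can map the last observed source domain toward the unseen target.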