Look, Learn and Leverage (L$^3$): Mitigating Visual-Domain Shift and Discovering Intrinsic Relations via Symbolic Alignment
Main Authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Summary: | Modern deep learning models have demonstrated outstanding performance at
discovering underlying mechanisms when both visual-appearance data and intrinsic-relation
data (e.g., causal structure) are sufficient, as in Disentangled Representation Learning
(DRL), Causal Representation Learning (CRL), and Visual Question Answering (VQA) methods.
However, the generalization ability of these models is challenged when the visual domain
shifts and relation data are absent during finetuning. To address this challenge, we
propose a novel learning framework, Look, Learn and Leverage (L$^3$), which decomposes the
learning process into three distinct phases and systematically utilizes class-agnostic
segmentation masks as a common symbolic space to align visual domains. Thus, a relations
discovery model can be trained on the source domain, and when the visual domain shifts and
the intrinsic relations are absent, the pretrained relations discovery model can be
directly reused while maintaining satisfactory performance. Extensive evaluations on three
tasks, DRL, CRL, and VQA, show outstanding results in all three, revealing the advantages
of L$^3$. |
---|---|
DOI: | 10.48550/arxiv.2408.17363 |
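The summary describes the three-phase structure only at a high level. Below is a minimal sketch of how that structure could look in code, assuming PyTorch; all module names, shapes, and the toy networks are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the L^3 idea described in the summary:
# images are mapped to class-agnostic segmentation masks (the shared symbolic space),
# a relations-discovery model is trained on those masks in the source domain, and
# under a visual-domain shift only the mask extractor would be adapted while the
# pretrained relations model is reused frozen.
import torch
import torch.nn as nn

class MaskExtractor(nn.Module):          # "Look": image -> class-agnostic masks
    def __init__(self, in_ch=3, n_masks=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_masks, 1),
        )
    def forward(self, x):
        return self.net(x).softmax(dim=1)   # soft region masks

class RelationsModel(nn.Module):         # "Learn": masks -> relation representation
    def __init__(self, n_masks=8, dim=64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(n_masks, dim)
    def forward(self, masks):
        z = self.pool(masks).flatten(1)
        return self.head(z)

look, learn = MaskExtractor(), RelationsModel()

# Phases 1-2: train both modules on the source domain, where relation data exist.
source_images = torch.randn(4, 3, 64, 64)
relations = learn(look(source_images))

# Phase 3 ("Leverage"): the visual domain shifts and relation data are absent.
# Freeze the pretrained relations model; only the mask extractor would be
# finetuned so target-domain masks align with the shared symbolic space.
for p in learn.parameters():
    p.requires_grad_(False)

target_images = torch.randn(4, 3, 64, 64)
relations_shifted = learn(look(target_images))
print(relations_shifted.shape)  # torch.Size([4, 64])
```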