ClusT3: Information Invariant Test-Time Training
Abstract: Deep Learning models have shown remarkable performance in a broad range of vision tasks. However, they are often vulnerable to domain shifts at test time. Test-time training (TTT) methods have been developed in an attempt to mitigate these vulnerabilities: a secondary task is solved at training time alongside the main task, and later serves as a self-supervised proxy task at test time. In this work, we propose a novel unsupervised TTT technique based on the maximization of Mutual Information between multi-scale feature maps and a discrete latent representation, which can be integrated into standard training as an auxiliary clustering task. Experimental results demonstrate competitive classification performance on several popular test-time adaptation benchmarks.
DOI: 10.48550/arxiv.2310.12345
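
The abstract describes an auxiliary clustering head whose mutual information with the backbone features is maximized during training and then used as the adaptation objective at test time. Below is a minimal, hedged sketch of that general idea in PyTorch: the projector shape, number of clusters, pooling choice, and the H(Z) - H(Z|X) estimator are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClusterProjector(nn.Module):
    """Maps a feature map to per-sample cluster-assignment probabilities
    (one such head could be attached at each feature scale)."""

    def __init__(self, in_channels: int, num_clusters: int = 10):
        super().__init__()
        self.proj = nn.Linear(in_channels, num_clusters)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) -> global average pool -> (B, C) -> (B, K)
        pooled = feats.mean(dim=(2, 3))
        return F.softmax(self.proj(pooled), dim=1)


def mutual_information_loss(p: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Negative mutual information between inputs and their discrete
    cluster assignment, estimated as H(Z) - H(Z|X): a high marginal
    entropy rewards balanced clusters, a low conditional entropy rewards
    confident per-sample assignments."""
    marginal = p.mean(dim=0)                                   # (K,)
    h_marginal = -(marginal * (marginal + eps).log()).sum()
    h_conditional = -(p * (p + eps).log()).sum(dim=1).mean()
    return -(h_marginal - h_conditional)  # minimizing maximizes MI


# Assumed workflow: at training time this loss is added (with some weight)
# to the supervised objective; at test time only the clustering loss is
# minimized to adapt the feature extractor to the shifted data.
```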