Preserving domain private information via mutual information maximization

Bibliographic Details
Published in: Neural Networks, 2024-04, Vol. 172, Article 106112
Main Authors: Chen, Jiahong, Wang, Jing, Lin, Weipeng, Zhang, Kuangen, de Silva, Clarence W.
Format: Article
Language: English
Online Access: Full text
Description
Summary: Recent advances in unsupervised domain adaptation have shown that mitigating domain divergence by extracting domain-invariant features can significantly improve a model's generalization to a new data domain. However, current methodologies often neglect to retain domain-private information, the unique information inherent to the unlabeled new domain, which compromises generalization. This paper presents a novel method that uses mutual information to protect this domain-specific information, ensuring that the latent features of the unlabeled data not only remain domain-invariant but also reflect the unique statistics of the unlabeled domain. We show that simultaneously maximizing mutual information and reducing domain divergence can effectively preserve domain-private information. We further illustrate that a neural estimator can aptly estimate the mutual information between the unlabeled input space and its latent feature space. Both theoretical analysis and empirical results validate the significance of preserving such unique information of the unlabeled domain for cross-domain generalization. Comparative evaluations reveal our method's superiority over existing state-of-the-art techniques across multiple benchmark datasets.
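
The record contains no code, but the abstract's key mechanism, estimating the mutual information I(X; Z) between the unlabeled inputs and their latent features with a neural estimator, is commonly realized with a MINE-style estimator (mutual information neural estimation, using the Donsker-Varadhan lower bound). The sketch below is a minimal, hypothetical PyTorch illustration of such an estimator; the names (StatisticsNetwork, mi_lower_bound, encoder, divergence, lam_div, lam_mi), network shape, and combined objective are assumptions for illustration, not the authors' implementation.

```python
import math

import torch
import torch.nn as nn


class StatisticsNetwork(nn.Module):
    """T(x, z): scores joint pairs (x, z) higher than shuffled (marginal) pairs."""

    def __init__(self, x_dim: int, z_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, z], dim=1))


def mi_lower_bound(t_net: nn.Module, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Donsker-Varadhan bound: E_joint[T(x, z)] - log E_marginal[exp(T(x, z))].

    Samples from the product of marginals p(x)p(z) are approximated by
    shuffling z within the batch.
    """
    joint_term = t_net(x, z).mean()
    z_shuffled = z[torch.randperm(z.size(0))]
    # log-mean-exp over the batch, computed stably via logsumexp
    marginal_term = torch.logsumexp(t_net(x, z_shuffled), dim=0) - math.log(z.size(0))
    return joint_term - marginal_term.squeeze()


# Hypothetical usage inside a domain-adaptation training step (names illustrative):
# z_t = encoder(x_t)  # latent features of an unlabeled target-domain batch
# loss = task_loss + lam_div * divergence(z_s, z_t) \
#        - lam_mi * mi_lower_bound(t_net, x_t.flatten(1), z_t)
```

Maximizing this bound (by minimizing its negative) pushes the encoder to keep target-specific statistics in z, while the divergence term keeps the features domain-invariant, matching the abstract's description of the two objectives being optimized simultaneously.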
ISSN: 0893-6080
eISSN: 1879-2782
DOI: 10.1016/j.neunet.2024.106112