Learning Robust Representations Of Generative Models Using Set-Based Artificial Fingerprints
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: With recent progress in deep generative models, the problem of identifying synthetic data and comparing their underlying generative processes has become an imperative task for various reasons, including fighting visual misinformation and source attribution. Existing methods often approximate the distance between the models via their sample distributions. In this paper, we approach the problem of fingerprinting generative models by learning representations that encode the residual artifacts left by the generative models as unique signals that identify the source models. We consider these unique traces (a.k.a. "artificial fingerprints") as representations of generative models and demonstrate their usefulness in both the discriminative task of source attribution and the unsupervised task of defining a similarity between the underlying models. We first extend the existing studies on fingerprints of GANs to four representative classes of generative models (VAEs, Flows, GANs, and score-based models) and demonstrate their existence and attributability. We then improve the stability and attributability of the fingerprints by proposing a new learning method based on set-encoding and contrastive training. Our set-encoder, unlike existing methods that operate on individual images, learns fingerprints from a set of images. We demonstrate improvements in stability and attributability through comparisons to state-of-the-art fingerprint methods and ablation studies. Further, our method employs contrastive training to learn an implicit similarity between models. We discover latent families of generative models by using this metric in a standard hierarchical clustering algorithm.
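The set-encoding and contrastive-training idea summarized in the abstract can be illustrated with a short sketch. The following PyTorch snippet is not the paper's implementation: the backbone, mean pooling, embedding size, and the supervised-contrastive (InfoNCE-style) loss are assumptions made for illustration. It shows the two ingredients the abstract names, an encoder that maps a set of images to a single fingerprint vector, and a contrastive objective that pulls together fingerprints of sets produced by the same source model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SetFingerprintEncoder(nn.Module):
    """Encodes a set of images from one generative model into a single fingerprint vector."""
    def __init__(self, embed_dim=128):
        super().__init__()
        # Small per-image CNN backbone (a stand-in; not the architecture used in the paper).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, embed_dim)

    def forward(self, image_sets):
        # image_sets: (batch, set_size, 3, H, W)
        b, s, c, h, w = image_sets.shape
        feats = self.backbone(image_sets.reshape(b * s, c, h, w))   # per-image features
        feats = feats.reshape(b, s, -1).mean(dim=1)                 # permutation-invariant mean pooling
        return F.normalize(self.head(feats), dim=-1)                # one unit-norm fingerprint per set

def contrastive_loss(z, source_ids, temperature=0.1):
    """Supervised contrastive objective: fingerprints of sets from the same model attract."""
    sim = z @ z.t() / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)                          # exclude self-similarity
    pos = (source_ids.unsqueeze(0) == source_ids.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    valid = pos.any(dim=1)                                          # anchors with at least one positive
    return -(log_prob[valid] * pos[valid]).sum(1).div(pos[valid].sum(1)).mean()

# Toy training step on random tensors standing in for sets of generated images.
encoder = SetFingerprintEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
image_sets = torch.randn(8, 4, 3, 64, 64)       # 8 sets, 4 images per set
source_ids = torch.randint(0, 4, (8,))          # which generative model produced each set
loss = contrastive_loss(encoder(image_sets), source_ids)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Because the encoder pools over the set dimension before the projection head, the resulting fingerprint is invariant to the order of images in the set, which is the property that distinguishes this set-based formulation from per-image fingerprinting.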
DOI: 10.48550/arxiv.2206.02067
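The final step described in the abstract, discovering latent families of generative models by running a standard hierarchical clustering algorithm on the learned similarity, might look like the following sketch. The model names, random embeddings, cosine distance, and average-linkage settings are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Placeholder model names and embeddings; in practice each row would be the
# learned fingerprint representation of one trained generative model.
model_names = ["vae_a", "vae_b", "flow_a", "gan_a", "gan_b", "score_a"]
fingerprints = np.random.randn(len(model_names), 128)
fingerprints /= np.linalg.norm(fingerprints, axis=1, keepdims=True)

dists = pdist(fingerprints, metric="cosine")    # condensed pairwise distance matrix
tree = linkage(dists, method="average")         # agglomerative clustering (average linkage)
families = fcluster(tree, t=3, criterion="maxclust")
for name, family in zip(model_names, families):
    print(f"{name}: family {family}")
```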