Decentralized Attribution of Generative Models
Main Author(s):
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Growing applications of generative models have led to new threats such as malicious personation and digital copyright infringement. One solution to these threats is model attribution, i.e., identifying the user-end model from which the content in question was generated. Existing studies have shown the empirical feasibility of attribution through a centralized classifier trained on all user-end models. However, this approach does not scale as the number of models keeps growing, nor does it provide an attributability guarantee. To this end, this paper studies decentralized attribution, which relies on binary classifiers associated with each user-end model. Each binary classifier is parameterized by a user-specific key and distinguishes its associated model distribution from the authentic data distribution. We develop sufficient conditions on the keys that guarantee a lower bound on attributability. Our method is validated on the MNIST, CelebA, and FFHQ datasets. We also examine the trade-off between generation quality and robustness of attribution against adversarial post-processing.
DOI: 10.48550/arxiv.2010.13974
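
The attribution scheme summarized in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes, purely for illustration, that each binary classifier is a linear function of a feature vector, and all names (make_keys, classify, attribute) are hypothetical.

```python
import numpy as np

# Illustration only: each user-end model i is associated with a
# user-specific key phi_i. A binary classifier parameterized by phi_i
# labels outputs of its associated model as +1 and authentic data as -1.
# Attributing a piece of content then amounts to checking which user's
# classifier claims it.

rng = np.random.default_rng(0)

def make_keys(num_users: int, dim: int) -> np.ndarray:
    """Sample unit-norm keys. The paper derives sufficient conditions on
    the keys (beyond normalization) that guarantee an attributability
    lower bound; normalization here is only a placeholder."""
    keys = rng.standard_normal((num_users, dim))
    return keys / np.linalg.norm(keys, axis=1, keepdims=True)

def classify(key: np.ndarray, x: np.ndarray) -> int:
    """Binary classifier parameterized by a key: +1 means 'generated by
    the associated user-end model', -1 means 'authentic data'. The
    linear form is an assumption made for this sketch."""
    return 1 if key @ x > 0 else -1

def attribute(keys: np.ndarray, x: np.ndarray) -> list[int]:
    """Return indices of user-end models whose classifiers claim x."""
    return [i for i, key in enumerate(keys) if classify(key, x) == 1]

# Toy usage: 3 users, 8-dimensional feature vectors.
keys = make_keys(num_users=3, dim=8)
x = rng.standard_normal(8)
print(attribute(keys, x))
```

The decentralized aspect is that attribution needs only the per-user binary classifiers, not a centralized classifier retrained over all user-end models; whether attribution is reliable then hinges on the conditions imposed on the keys, which this sketch does not enforce.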