Piggyback GAN: Efficient Lifelong Learning for Image Conditioned Generation
Abstract: Humans accumulate knowledge in a lifelong fashion. Modern deep neural networks, on the other hand, are susceptible to catastrophic forgetting: when adapted to perform new tasks, they often fail to preserve their performance on previously learned tasks. Given a sequence of tasks, a naive way to avoid catastrophic forgetting is to train a separate standalone model for each task, which drastically increases the total number of parameters without reusing previous models. In contrast, we propose a parameter-efficient framework, Piggyback GAN, which learns the current task by building a set of convolutional and deconvolutional filters that are factorized into the filters of the models trained on previous tasks. For the current task, our model achieves generation quality on par with a standalone model while using fewer parameters. For previous tasks, our model also preserves generation quality, since the filters learned for those tasks are never altered. We validate Piggyback GAN on a variety of image-conditioned generation tasks across different domains, and provide qualitative and quantitative results showing that the proposed approach addresses catastrophic forgetting effectively and efficiently.
DOI: 10.48550/arxiv.2104.11939
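The abstract's central mechanism, building the current task's filters from frozen filters of previous tasks plus a small set of newly learned filters, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; the class name PiggybackConv2d, the parameter num_new_filters, and the initialization choices are assumptions made only for this example.

```python
# Minimal sketch (assumed, not the paper's code) of filter factorization for
# lifelong image generation: filters for a new task are linear combinations of
# frozen filters from previous tasks, plus a few unconstrained new filters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PiggybackConv2d(nn.Module):
    def __init__(self, frozen_weight, num_new_filters, stride=1, padding=1):
        super().__init__()
        # frozen_weight: (out_old, in_ch, k, k) filters from previous tasks; never updated,
        # so generation quality on earlier tasks is preserved by construction.
        self.register_buffer("frozen_weight", frozen_weight)
        out_old, in_ch, k, _ = frozen_weight.shape
        # Learnable mixing coefficients that "piggyback" on the frozen filter bank.
        self.mix = nn.Parameter(torch.eye(out_old))
        # A small bank of brand-new filters learned only for the current task.
        self.new_weight = nn.Parameter(0.01 * torch.randn(num_new_filters, in_ch, k, k))
        self.stride, self.padding = stride, padding

    def forward(self, x):
        out_old, in_ch, k, _ = self.frozen_weight.shape
        # Recombine frozen filters with learned coefficients, then append new filters.
        combined = (self.mix @ self.frozen_weight.view(out_old, -1)).view(out_old, in_ch, k, k)
        weight = torch.cat([combined, self.new_weight], dim=0)
        return F.conv2d(x, weight, stride=self.stride, padding=self.padding)

# Usage: reuse a 64-filter bank from an earlier task and add 8 new filters,
# so only the mixing matrix and the 8 new filters are trainable.
prev = torch.randn(64, 3, 3, 3)            # stand-in for frozen task-1 filters
layer = PiggybackConv2d(prev, num_new_filters=8)
y = layer(torch.randn(2, 3, 32, 32))       # -> shape (2, 72, 32, 32)
```

Under these assumptions, the per-task parameter cost is one mixing matrix plus a small number of new filters, rather than a full standalone filter bank, which is the parameter-efficiency argument the abstract makes.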