Kernel Modulation: A Parameter-Efficient Method for Training Convolutional Neural Networks
Format: Article
Language: English
Abstract: Deep Neural Networks, particularly Convolutional Neural Networks (ConvNets), have achieved remarkable success in many vision tasks, but they usually require millions of parameters to reach good accuracy. As more applications rely on ConvNets, updating hundreds of networks for multiple tasks on an embedded device can be costly in terms of memory, bandwidth, and energy. Approaches to reducing this cost include model compression and parameter-efficient models that adapt only a subset of network layers for each new task. This work proposes a novel parameter-efficient kernel modulation (KM) method that adapts all parameters of a base network instead of a subset of layers. KM uses lightweight, task-specialized kernel modulators that require only an additional 1.4% of the base network parameters. With multiple tasks, only the task-specialized KM weights are communicated to and stored on the end-user device. We apply this method to training ConvNets in Transfer Learning and Meta-Learning scenarios. Our results show that KM delivers up to 9% higher accuracy than other parameter-efficient methods on the Transfer Learning benchmark.
DOI: 10.48550/arxiv.2203.15297
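The abstract describes lightweight, task-specialized kernel modulators that adapt a shared base ConvNet while adding only a small fraction of extra parameters. As a rough illustrative sketch only, and not the authors' exact formulation, the following PyTorch snippet wraps a frozen `nn.Conv2d` layer with a small learnable modulator that element-wise rescales its kernel for a given task; the class name `KernelModulatedConv2d`, the per-output-channel modulator shape, and the initialization are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KernelModulatedConv2d(nn.Module):
    """Hypothetical sketch: a frozen conv layer whose kernel is rescaled
    by a lightweight, task-specific modulator (not the paper's exact KM)."""

    def __init__(self, base_conv: nn.Conv2d):
        super().__init__()
        self.base_conv = base_conv
        # Freeze the shared base-network weights; they are reused across tasks.
        for p in self.base_conv.parameters():
            p.requires_grad = False
        # One learnable scale per output channel (far fewer parameters than
        # the full kernel); initialized to 1 so the layer starts out
        # identical to the unmodulated base layer.
        self.modulator = nn.Parameter(
            torch.ones(base_conv.out_channels, 1, 1, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Element-wise rescale the frozen kernel with the task-specific
        # modulator, then run a standard convolution with the base layer's
        # remaining hyperparameters.
        weight = self.base_conv.weight * self.modulator
        return F.conv2d(
            x,
            weight,
            bias=self.base_conv.bias,
            stride=self.base_conv.stride,
            padding=self.base_conv.padding,
            dilation=self.base_conv.dilation,
            groups=self.base_conv.groups,
        )
```

In a setup like this sketch, only the modulator tensors (and typically a task-specific classifier head) would be trained, communicated, and stored per task, while the frozen base weights remain shared, which is the kind of memory and bandwidth saving the abstract describes.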