Sample-specific Masks for Visual Reprogramming-based Prompting
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Visual reprogramming (VR) is a prompting technique that aims to re-purpose a pre-trained model (e.g., a classifier on ImageNet) for target tasks (e.g., medical data prediction) by learning a small-scale pattern added to input images instead of tuning the considerable number of parameters within the model. The location of the pattern within input samples is usually determined by a pre-defined mask shared across all samples. In this paper, we show that the shared mask potentially limits VR's generalization and increases its approximation error due to the lack of sample-level adaptation. Motivated by this finding, we design a new framework for VR called sample-specific multi-channel masks (SMM). Specifically, SMM employs a lightweight ConvNet and patch-wise interpolation to generate sample-specific three-channel masks instead of a shared, pre-defined mask. Since we generate different masks for individual samples, SMM is theoretically shown to reduce the approximation error for the target tasks compared with existing state-of-the-art VR methods. We also empirically demonstrate its performance gain on both ResNet and ViT. The success of SMM further highlights the broader applicability of VR in leveraging the latent knowledge of pre-trained models for various target tasks. Our code is available at https://github.com/tmlr-group/SMM.
DOI: 10.48550/arxiv.2406.03150
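
Based only on the abstract above, the following is a minimal PyTorch sketch of the SMM idea: a lightweight ConvNet predicts a coarse three-channel mask for each input sample, the mask is expanded patch-wise to full resolution, and it gates a learned reprogramming pattern added to the image before it enters the frozen pre-trained model. All module names, layer sizes, and the exact gating form are illustrative assumptions, not the authors' implementation; see the linked repository for the reference code.

```python
# Illustrative sketch of sample-specific multi-channel masks (SMM) for visual
# reprogramming, reconstructed from the abstract. Architecture details
# (layer widths, patch grid, sigmoid gating) are assumptions, not the
# authors' reference code (https://github.com/tmlr-group/SMM).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SampleSpecificMask(nn.Module):
    """Lightweight ConvNet that maps an input image to a coarse 3-channel
    mask, then upsamples it patch-wise to the full image resolution."""

    def __init__(self, patch_grid: int = 14):
        super().__init__()
        self.patch_grid = patch_grid  # coarse mask resolution: one value per patch
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, kernel_size=1),  # one mask channel per image channel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        coarse = self.net(x)                                     # (B, 3, h', w')
        coarse = F.adaptive_avg_pool2d(coarse, self.patch_grid)  # (B, 3, P, P)
        # Patch-wise interpolation: each coarse cell is expanded to cover one
        # image patch (nearest-neighbour upsampling keeps the patch structure).
        mask = F.interpolate(coarse, size=(h, w), mode="nearest")
        return torch.sigmoid(mask)  # per-sample, per-channel values in (0, 1)


class ReprogrammedInput(nn.Module):
    """Adds a learned reprogramming pattern to the input, gated per sample
    and per channel by the SMM mask; the pre-trained model stays frozen."""

    def __init__(self, image_size: int = 224):
        super().__init__()
        self.pattern = nn.Parameter(torch.zeros(1, 3, image_size, image_size))
        self.mask_net = SampleSpecificMask()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = self.mask_net(x)          # sample-specific three-channel mask
        return x + mask * self.pattern   # prompted image for the frozen model


if __name__ == "__main__":
    prompt = ReprogrammedInput()
    images = torch.randn(4, 3, 224, 224)  # e.g. resized target-task images
    prompted = prompt(images)
    print(prompted.shape)  # torch.Size([4, 3, 224, 224])
```

In this sketch, only the mask network and the pattern are trained, which matches the abstract's premise of leaving the pre-trained model's parameters untouched; replacing the fixed shared mask with the ConvNet output is what provides the sample-level adaptation the paper argues for.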