Masking Augmentation for Supervised Learning
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Pre-training using random masking has emerged as a novel trend in training
techniques. However, supervised learning faces a challenge in adopting masking
augmentations, primarily due to unstable training. In this paper, we propose a
novel way to involve masking augmentations dubbed Masked Sub-model (MaskSub).
MaskSub consists of the main-model and sub-model; while the former enjoys
conventional training recipes, the latter leverages the benefit of strong
masking augmentations in training. MaskSub addresses the challenge by
mitigating adverse effects through a relaxed loss function similar to a
self-distillation loss. Our analysis shows that MaskSub improves performance,
with the training loss converging even faster than regular training, which
suggests our method facilitates training. We further validate MaskSub across
diverse training recipes and models, including DeiT-III, MAE fine-tuning, CLIP
fine-tuning, ResNet, and Swin Transformer. Our results show that MaskSub
consistently provides significant performance gains across all the cases.
MaskSub provides a practical and effective solution for introducing additional
regularization under various training recipes. Code available at
https://github.com/naver-ai/augsub |
DOI: | 10.48550/arxiv.2306.11339 |
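The abstract above describes MaskSub only at a high level: a main-model trained with a conventional supervised recipe, and a sub-model trained on strongly masked inputs through a relaxed, self-distillation-like loss. The sketch below is a minimal illustration of that idea, not the authors' implementation (the official code is at https://github.com/naver-ai/augsub). All names (`random_patch_mask`, `masksub_step`, `mask_ratio`, `distill_weight`, `temperature`) are illustrative assumptions, as is the choice to let the sub-model branch share parameters with the main model.

```python
# Illustrative sketch of a MaskSub-style training step, assuming a standard
# image-classification setup. This is NOT the paper's official code.

import torch
import torch.nn.functional as F


def random_patch_mask(x, mask_ratio=0.5, patch=16):
    """Zero out a random subset of non-overlapping patches.

    Assumed masking scheme for illustration; requires height/width divisible
    by `patch`.
    """
    b, c, h, w = x.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=x.device) > mask_ratio).float()
    keep = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return x * keep


def masksub_step(model, images, labels, distill_weight=1.0, temperature=1.0):
    """Combined loss: supervised cross-entropy on clean images plus a relaxed,
    self-distillation-style KL term on masked images.

    The sub-model is assumed here to share parameters with the main model;
    the paper's actual design may differ.
    """
    # Main-model branch: conventional training on the unmasked input.
    logits_main = model(images)
    loss_main = F.cross_entropy(logits_main, labels)

    # Sub-model branch: strong masking augmentation, trained to match the
    # main branch's (detached) predictions rather than the hard labels.
    logits_sub = model(random_patch_mask(images))
    target = F.softmax(logits_main.detach() / temperature, dim=-1)
    loss_sub = F.kl_div(
        F.log_softmax(logits_sub / temperature, dim=-1),
        target,
        reduction="batchmean",
    )

    return loss_main + distill_weight * loss_sub
```

In an actual training loop, the returned loss would be backpropagated and the optimizer stepped as usual; the masking ratio, temperature, and distillation weight above are placeholders rather than values taken from the paper.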