Training a Vision Transformer from scratch in less than 24 hours with 1 GPU
Main authors: 
Format: Article
Language: English
Keywords: 
Online access: Order full text
Abstract: Transformers have become central to recent advances in computer vision. However, training a vision Transformer (ViT) model from scratch can be resource-intensive and time-consuming. In this paper, we aim to explore approaches to reduce the training costs of ViT models. We introduce some algorithmic improvements to enable training a ViT model from scratch with limited hardware (1 GPU) and time (24 hours) resources. First, we propose an efficient approach to add locality to the ViT architecture. Second, we develop a new image size curriculum learning strategy, which reduces the number of patches extracted from each image at the beginning of training. Finally, we propose a new variant of the popular ImageNet1k benchmark by adding hardware and time constraints. We evaluate our contributions on this benchmark and show that they can significantly improve performance given the proposed training budget. We will share the code at https://github.com/BorealisAI/efficient-vit-training.
DOI: 10.48550/arxiv.2211.05187
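The abstract names its two main techniques without detailing their mechanisms. Purely as an illustration of the first idea: one common way to add locality to a ViT (used, for example, by LocalViT-style models; the paper's exact mechanism may differ) is a depthwise convolution inside each feed-forward block, so that tokens mix with their spatial neighbors. A minimal PyTorch sketch, where `LocalFFN`, its dimensions, and the grid handling are all assumptions, not taken from the paper:

```python
import torch
from torch import nn

class LocalFFN(nn.Module):
    # Feed-forward block with a depthwise 3x3 convolution between the two
    # linear layers; the conv runs on the 2D patch grid, giving each token
    # access to its spatial neighbors. Illustrative only, not the paper's design.
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor, grid: int) -> torch.Tensor:
        # x: (batch, tokens, dim), with tokens == grid * grid patch tokens
        b, n, _ = x.shape
        h = self.act(self.fc1(x))
        h = h.transpose(1, 2).reshape(b, -1, grid, grid)  # token sequence -> 2D grid
        h = self.act(self.dwconv(h))
        h = h.flatten(2).transpose(1, 2)                  # 2D grid -> token sequence
        return self.fc2(h)

out = LocalFFN(dim=192, hidden=384)(torch.rand(2, 14 * 14, 192), grid=14)
print(out.shape)  # torch.Size([2, 196, 192])
```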
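The second idea, image size curriculum learning, can be pictured as follows: train on downscaled images first, so that a fixed patch size yields far fewer tokens (attention cost grows quadratically with token count), then grow the resolution toward the target size. A hedged sketch; the linear schedule, the 128-to-224 sizes, and the helper name `image_size_at` are hypothetical choices, not the paper's settings:

```python
import torch
from torchvision.transforms.functional import resize

PATCH = 16  # ViT-style fixed patch size (assumed)

def image_size_at(epoch: int, start: int = 128, final: int = 224,
                  ramp_epochs: int = 50) -> int:
    # Linear ramp from `start` to `final` pixels, snapped to a multiple of
    # the patch size so images always tile into whole patches.
    if epoch >= ramp_epochs:
        return final
    side = start + (final - start) * epoch / ramp_epochs
    return max(PATCH, round(side / PATCH) * PATCH)

for epoch in (0, 25, 50):
    side = image_size_at(epoch)
    batch = torch.rand(8, 3, 224, 224)                   # stand-in for a data batch
    batch = resize(batch, [side, side], antialias=True)  # downscale early epochs
    print(epoch, side, (side // PATCH) ** 2)  # 64 patches at epoch 0 vs. 196 at epoch 50
```

Note that because the token count changes with resolution, a real training loop would also need to interpolate the ViT's positional embeddings at each size change; that detail is omitted here.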