Don't Look Twice: Faster Video Transformers with Run-Length Tokenization
Format: Article
Language: English
Online access: Order full text
Abstract: Transformers are slow to train on videos due to extremely large numbers of
input tokens, even though many video tokens are repeated over time. Existing methods
to remove such uninformative tokens either have significant overhead, negating any
speedup, or require tuning for different datasets and examples. We present Run-Length
Tokenization (RLT), a simple approach to speed up video transformers inspired by
run-length encoding for data compression. RLT efficiently finds and removes runs of
patches that are repeated over time prior to model inference, then replaces them with
a single patch and a positional encoding that represents the resulting token's new
length. Our method is content-aware, requiring no tuning for different datasets, and
fast, incurring negligible overhead. RLT yields a large speedup in training, reducing
the wall-clock time to fine-tune a video transformer by 30% while matching baseline
model performance. RLT also works without any training, increasing model throughput
by 35% with only a 0.1% drop in accuracy. RLT speeds up training at 30 FPS by more
than 100%, and on longer video datasets can reduce the token count by up to 80%. Our
project page is at https://rccchoudhury.github.io/projects/rlt/.
DOI: 10.48550/arxiv.2411.05222
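
The abstract's description of RLT is concrete enough to sketch: compare each patch
with the patch at the same spatial location in the previous frame, drop near-identical
repeats, and keep the first patch of each run together with its run length for a
length-aware positional encoding. The PyTorch sketch below follows only that
description; the tensor shapes, the difference threshold, and the function name
`run_length_tokenize` are illustrative assumptions, not the authors' released
implementation (see the project page above for that).

```python
# Minimal sketch of run-length tokenization for video patch tokens.
# Assumptions (not from the paper): patches arrive as a (T, N, D) tensor,
# "repeated" means mean absolute difference below a fixed threshold.
import torch


def run_length_tokenize(patches: torch.Tensor, threshold: float = 0.1):
    """Drop patch tokens that repeat the previous frame's patch at the same
    spatial location; keep the first patch of each run plus its run length.

    patches: (T, N, D) tensor of T frames, N patches per frame, D-dim patches.
    Returns kept patches (M, D), their (frame, position) indices (M, 2), and
    the run length of each kept patch (M,), which the abstract says is fed to
    the model through a length-aware positional encoding.
    """
    T, N, D = patches.shape

    # A patch is "repeated" if it barely differs from the same spatial patch
    # one frame earlier; frame 0 is always kept.
    diffs = (patches[1:] - patches[:-1]).abs().mean(dim=-1)          # (T-1, N)
    repeated = torch.cat(
        [torch.zeros(1, N, dtype=torch.bool), diffs < threshold], dim=0
    )                                                                # (T, N)
    keep = ~repeated

    # Run length of a kept patch = 1 + number of repeated patches that follow
    # it at the same spatial location before the next kept patch.
    run_lengths = torch.zeros(T, N, dtype=torch.long)
    current = torch.zeros(N, dtype=torch.long)  # frame index starting the current run
    for t in range(T):
        current = torch.where(keep[t], torch.full_like(current, t), current)
        run_lengths[current, torch.arange(N)] += 1

    frame_idx, pos_idx = keep.nonzero(as_tuple=True)
    return patches[keep], torch.stack([frame_idx, pos_idx], dim=-1), run_lengths[keep]


# Example: a static 4-frame clip collapses to one run per spatial position.
video = torch.randn(1, 16, 768).repeat(4, 1, 1)   # 4 identical frames, 16 patches each
kept, indices, lengths = run_length_tokenize(video)
print(kept.shape, lengths.unique())               # torch.Size([16, 768]), tensor([4])
```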