Diffusion Models as Masked Audio-Video Learners
Format: Article
Language: English
Abstract: Over the past several years, the synchronization between audio and visual signals has been leveraged to learn richer audio-visual representations. Aided by the wide availability of unlabeled videos, many unsupervised training frameworks have demonstrated impressive results in various downstream audio and video tasks. Recently, Masked Audio-Video Learners (MAViL) has emerged as a state-of-the-art audio-video pre-training framework. MAViL couples contrastive learning with masked autoencoding to jointly reconstruct audio spectrograms and video frames by fusing information from both modalities. In this paper, we study the potential synergy between diffusion models and MAViL, seeking to derive mutual benefits from these two frameworks. The incorporation of diffusion into MAViL, combined with training efficiency methodologies that include a masking ratio curriculum and adaptive batch sizing, results in a notable 32% reduction in pre-training Floating-Point Operations (FLOPS) and an 18% decrease in pre-training wall clock time. Crucially, this enhanced efficiency does not compromise the model's performance in downstream audio-classification tasks when compared to MAViL's performance.
DOI: 10.48550/arxiv.2310.03937
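
The abstract cites a masking ratio curriculum and adaptive batch sizing as the efficiency techniques paired with the diffusion-augmented MAViL objective. The sketch below illustrates one plausible way these two ideas can work together during pre-training; the linear ramp, the ratio endpoints, and the batch-scaling rule are assumptions for illustration only, not the schedule reported in the paper.

```python
# Illustrative sketch only: the paper's abstract names a masking-ratio
# curriculum and adaptive batch sizing but does not give the exact schedule.
# The linear ramp, endpoint ratios, and batch-size rule here are assumptions.

def masking_ratio(step: int, total_steps: int,
                  start_ratio: float = 0.6, end_ratio: float = 0.8) -> float:
    """Linearly ramp the fraction of masked audio/video patches over training."""
    frac = min(max(step / max(total_steps, 1), 0.0), 1.0)
    return start_ratio + frac * (end_ratio - start_ratio)


def adaptive_batch_size(base_batch: int, base_ratio: float, ratio: float) -> int:
    """Grow the batch as masking increases: fewer visible tokens per sample
    keeps tokens (and memory) per step roughly constant if the batch scales
    with 1 / (1 - ratio)."""
    visible_base = 1.0 - base_ratio
    visible_now = 1.0 - ratio
    return max(1, int(base_batch * visible_base / visible_now))


if __name__ == "__main__":
    total = 100_000
    for step in (0, 50_000, 100_000):
        r = masking_ratio(step, total)
        b = adaptive_batch_size(base_batch=256, base_ratio=0.6, ratio=r)
        print(f"step {step:>6}: mask ratio {r:.2f}, batch size {b}")
```

The intuition behind pairing the two: as the masking ratio rises, the encoder sees fewer visible patches per clip, so a larger batch can fit in the same memory and compute budget, which is one way the reported FLOPS and wall-clock savings could be realized.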