Diffusion Models as Masked Audio-Video Learners

Over the past several years, the synchronization between audio and visual signals has been leveraged to learn richer audio-visual representations. Aided by the large availability of unlabeled videos, many unsupervised training frameworks have demonstrated impressive results in various downstream audio and video tasks. Recently, Masked Audio-Video Learners (MAViL) has emerged as a state-of-the-art audio-video pre-training framework. MAViL couples contrastive learning with masked autoencoding to jointly reconstruct audio spectrograms and video frames by fusing information from both modalities. In this paper, we study the potential synergy between diffusion models and MAViL, seeking to derive mutual benefits from these two frameworks. The incorporation of diffusion into MAViL, combined with various training efficiency methodologies that include the utilization of a masking ratio curriculum and adaptive batch sizing, results in a notable 32% reduction in pre-training Floating-Point Operations (FLOPS) and an 18% decrease in pre-training wall clock time. Crucially, this enhanced efficiency does not compromise the model's performance in downstream audio-classification tasks when compared to MAViL's performance.
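As a rough sketch of the two efficiency techniques named in the abstract, the snippet below pairs a masking ratio curriculum (the fraction of masked patches ramps up over training) with adaptive batch sizing (the batch grows as heavier masking makes each clip cheaper to encode). The linear schedule, the 0.5-to-0.8 ratio range, and the inverse-visible-token scaling rule are illustrative assumptions, not the authors' published recipe.

def masking_ratio(step, total_steps, start=0.5, end=0.8):
    """Linearly increase the fraction of masked patches over training.

    The 0.5 -> 0.8 range is an assumed placeholder, not the paper's values.
    """
    t = min(step / max(total_steps, 1), 1.0)
    return start + t * (end - start)


def adaptive_batch_size(base_batch, mask_ratio, base_mask_ratio=0.5):
    """Scale the batch so encoder work per step stays roughly constant:
    with more patches masked, each clip exposes fewer visible tokens,
    so more clips fit in the same FLOP budget.
    """
    visible = 1.0 - mask_ratio
    base_visible = 1.0 - base_mask_ratio
    return max(1, round(base_batch * base_visible / visible))


if __name__ == "__main__":
    total = 100_000
    for step in (0, 50_000, 100_000):
        r = masking_ratio(step, total)
        b = adaptive_batch_size(256, r)
        print(f"step {step:>7}: mask ratio {r:.2f}, batch size {b}")

Under this assumed schedule the batch grows from 256 clips at a 0.50 masking ratio to 640 at 0.80, keeping the visible-token count per step roughly constant; this is one plausible way a curriculum can trade heavier masking for throughput without shrinking the effective sample count.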

Bibliographic Details
Main Authors: Nunez, Elvis; Jin, Yanzi; Rastegari, Mohammad; Mehta, Sachin; Horton, Maxwell
Format: Article
Language: English
Published: 2023-10-05
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Multimedia; Computer Science - Sound
DOI: 10.48550/arxiv.2310.03937
Online Access: https://arxiv.org/abs/2310.03937