Natural Language Supervision for General-Purpose Audio Representations
Format: Article
Language: English
Online Access: Order full text
Abstract:
Audio-language models jointly learn multimodal text and audio representations
that enable zero-shot inference. These models rely on their encoders to create
powerful representations of the input and to generalize to multiple tasks
spanning sound, music, and speech. Although such models have achieved
remarkable performance, a performance gap with task-specific models remains.
In this paper, we propose a Contrastive Language-Audio Pretraining model that
is pretrained on a diverse collection of 4.6M audio-text pairs and employs two
innovative encoders for zero-shot inference. To learn audio representations,
we trained an audio encoder on 22 audio tasks, instead of the standard
training on sound event classification. To learn language representations, we
trained an autoregressive decoder-only model instead of the standard
encoder-only models. The audio and language representations are then brought
into a joint multimodal space using contrastive learning. We used our encoders
to improve downstream performance. We extensively evaluated the generalization
of our representations on 26 downstream tasks, the largest evaluation in the
literature. Our model achieves state-of-the-art results on several tasks,
leading the way toward general-purpose audio representations.
DOI: 10.48550/arxiv.2309.05767