Training Audio Captioning Models without Audio
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Automated Audio Captioning (AAC) is the task of generating natural language
descriptions given an audio stream. A typical AAC system requires manually
curated training data of audio segments and corresponding text caption
annotations. The creation of these audio-caption pairs is costly, resulting in
general data scarcity for the task. In this work, we address this major
limitation and propose an approach to train AAC systems using only text. Our
approach leverages the multimodal space of contrastively trained audio-text
models, such as CLAP. During training, a decoder generates captions conditioned
on the pretrained CLAP text encoder. During inference, the text encoder is
replaced with the pretrained CLAP audio encoder. To bridge the modality gap
between text and audio embeddings, we propose the use of noise injection or a
learnable adapter during training. We find that the proposed text-only
framework performs competitively with state-of-the-art models trained with
paired audio, showing that efficient text-to-audio transfer is possible.
Finally, we showcase both stylized audio captioning and caption enrichment
while training without audio or human-created text captions.
DOI: 10.48550/arxiv.2309.07372
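
To make the mechanism summarized in the abstract concrete, the following is a minimal PyTorch-style sketch of text-only training with an inference-time encoder swap. It assumes a CLAP-style model exposing `encode_text` and `encode_audio` that map into a shared embedding space, and a caption decoder conditioned on a single prefix embedding; these interfaces, the `Adapter` module, and the noise scale are illustrative assumptions, not the authors' released code.

```python
# Sketch of the text-only AAC training idea: train the decoder on CLAP *text*
# embeddings, then swap in the CLAP *audio* encoder at inference time.
# `clap` and `decoder` are assumed interfaces, not an actual library API.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Learnable adapter that nudges text embeddings toward the audio side of
    the shared CLAP space (one of the two bridging options in the abstract)."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the adapted embedding close to the original.
        return x + self.net(x)


def training_step(clap, decoder, adapter, captions, noise_std=0.015):
    """Text-only training: no audio is used anywhere in this step."""
    with torch.no_grad():
        prefix = clap.encode_text(captions)  # (B, D), frozen CLAP text encoder
    # Noise injection (bridging option 1): regularizes against the modality gap.
    prefix = prefix + noise_std * torch.randn_like(prefix)
    # Learnable adapter (bridging option 2).
    prefix = adapter(prefix)
    # Standard captioning loss: the decoder reconstructs the caption it was encoded from.
    return decoder(prefix=prefix, targets=captions)


@torch.no_grad()
def caption_audio(clap, decoder, adapter, audio_waveform):
    """Inference: replace the text encoder with the CLAP audio encoder;
    decoder and adapter are reused unchanged."""
    prefix = clap.encode_audio(audio_waveform)
    prefix = adapter(prefix)
    return decoder.generate(prefix=prefix)
```

The transfer works only because contrastive pretraining places CLAP text and audio embeddings in a shared space; the noise and the residual adapter are two ways of compensating for the remaining offset between the two modalities.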