Audio Entailment: Assessing Deductive Reasoning for Audio Understanding
Main Authors: | , , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | Recent literature uses language to build foundation models for audio. These
Audio-Language Models (ALMs) are trained on a vast number of audio-text pairs
and show remarkable performance in tasks including Text-to-Audio Retrieval,
Captioning, and Question Answering. However, their ability to engage in more
complex open-ended tasks, like Interactive Question-Answering, requires
proficiency in logical reasoning -- a skill not yet benchmarked. We introduce
the novel task of Audio Entailment to evaluate an ALM's deductive reasoning
ability. This task assesses whether a text description (hypothesis) of audio
content can be deduced from an audio recording (premise), with potential
conclusions being entailment, neutral, or contradiction, depending on the
sufficiency of the evidence. We create two datasets for this task with audio
recordings sourced from two audio captioning datasets -- AudioCaps and Clotho
-- and hypotheses generated using Large Language Models (LLMs). We benchmark
state-of-the-art ALMs and find deficiencies in logical reasoning with both
zero-shot and linear probe evaluations. Finally, we propose
"caption-before-reason", an intermediate step of captioning that improves the
zero-shot and linear-probe performance of ALMs by an absolute 6% and 3%,
respectively. |
---|---|
DOI: | 10.48550/arxiv.2407.18062 |
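The abstract's "caption-before-reason" step decouples audio understanding from textual reasoning: the ALM first captions the recording (the audio premise), and the entailment label is then decided over text alone. Below is a minimal, hypothetical Python sketch of that two-stage interface. The function names, the fixed toy caption, and the word-overlap heuristic are illustrative stand-ins under stated assumptions, not the authors' implementation.

```python
"""Toy sketch of a caption-before-reason pipeline for Audio Entailment.

Assumptions: caption_audio() stands in for an audio-language model (ALM)
captioner, and classify_entailment() stands in for a zero-shot LLM judge or
linear probe. Neither corresponds to the paper's actual code.
"""

LABELS = ("entailment", "neutral", "contradiction")


def caption_audio(audio_path: str) -> str:
    # Assumption: a real ALM would describe the recording at `audio_path`;
    # we return a fixed toy caption so the sketch runs end to end.
    return "a dog barks while rain falls in the background"


def classify_entailment(premise: str, hypothesis: str) -> str:
    # Assumption: the real system prompts an LLM (zero-shot) or trains a
    # linear probe; this word-overlap heuristic only illustrates the
    # three-way decision interface.
    premise_words = set(premise.lower().split())
    hypothesis_words = set(hypothesis.lower().split())
    overlap = len(premise_words & hypothesis_words) / max(len(hypothesis_words), 1)
    if overlap > 0.6:
        return "entailment"
    if overlap > 0.2:
        return "neutral"
    return "contradiction"


def caption_before_reason(audio_path: str, hypothesis: str) -> str:
    # Step 1: turn the audio premise into a text caption.
    caption = caption_audio(audio_path)
    # Step 2: reason over text only to pick one of LABELS.
    return classify_entailment(caption, hypothesis)


if __name__ == "__main__":
    print(caption_before_reason("example.wav", "A dog barks outside"))
```

The split mirrors the abstract's finding: converting the premise to text first lets the text-side reasoner work in its native modality, which the authors report improves zero-shot and linear-probe accuracy by 6% and 3% absolute, respectively.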