Comprehensive Audio Query Handling System with Integrated Expert Models and Contextual Understanding
Main authors:
Format: Article
Language: eng
Online access: Order full text
Abstract: This paper presents a comprehensive chatbot system designed to handle a wide range of audio-related queries by integrating multiple specialized audio processing models. The proposed system uses an intent classifier, trained on a diverse audio query dataset, to route queries about audio content to expert models such as Automatic Speech Recognition (ASR), Speaker Diarization, Music Identification, and Text-to-Audio generation. A 3.8B-parameter LLM then takes inputs from an Audio Context Detection (ACD) module, which extracts audio event information from the audio, and post-processes the text-domain outputs of the expert models to compute the final response to the user. We evaluated the system on custom audio tasks and on the sound set of the MMAU benchmark. The custom datasets were motivated by target use cases not covered in industry benchmarks and included the ACD-timestamp-QA (Question Answering) and ACD-temporal-QA datasets, which evaluate timestamp and temporal reasoning questions, respectively. First, we determined that a BERT-based intent classifier outperforms an LLM few-shot intent classifier in routing queries. Experiments further show that our approach significantly improves accuracy on some custom tasks compared to state-of-the-art Large Audio Language Models and outperforms models in the 7B-parameter size range on the sound test set of the MMAU benchmark, thereby offering an attractive option for on-device deployment.
DOI: 10.48550/arxiv.2412.03980
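
The abstract describes a route-then-compose architecture: an intent classifier dispatches each query to one expert model, and a small LLM fuses that expert's text-domain output with ACD-derived audio events into the final answer. The sketch below illustrates that flow in plain Python. All names here (`classify_intent`, `EXPERTS`, `detect_audio_context`, `run_llm`) and the keyword heuristic are illustrative assumptions, not the paper's implementation, which uses a trained BERT-based classifier and real expert models.

```python
# Hypothetical sketch of the routing pipeline described in the abstract.
# Every function here is a stand-in, not the authors' code.

from typing import Callable, Dict

def classify_intent(query: str) -> str:
    """Stand-in for the BERT-based intent classifier.

    The paper trains this on a diverse audio-query dataset; a keyword
    heuristic keeps the sketch self-contained.
    """
    q = query.lower()
    if "transcribe" in q or "said" in q:
        return "asr"
    if "speaker" in q or "who is talking" in q:
        return "diarization"
    if "song" in q or "music" in q:
        return "music_id"
    if "generate" in q:
        return "text_to_audio"
    return "general_audio_qa"

# Expert models, stubbed out. Each maps raw audio to a text-domain output,
# mirroring the experts named in the abstract.
EXPERTS: Dict[str, Callable[[bytes], str]] = {
    "asr": lambda audio: "<transcript>",
    "diarization": lambda audio: "<speaker turns>",
    "music_id": lambda audio: "<track metadata>",
    "text_to_audio": lambda audio: "<path to generated audio>",
    "general_audio_qa": lambda audio: "",
}

def detect_audio_context(audio: bytes) -> str:
    """Stand-in for the Audio Context Detection (ACD) module, which
    extracts audio-event information such as events with timestamps."""
    return "<audio events with timestamps>"

def run_llm(prompt: str) -> str:
    """Stand-in for the 3.8B-parameter LLM that computes the final response."""
    return "<LLM response>"

def answer(query: str, audio: bytes) -> str:
    """Route the query to one expert, then let the LLM post-process the
    expert output together with ACD context into the final response."""
    intent = classify_intent(query)
    expert_output = EXPERTS[intent](audio)
    acd_context = detect_audio_context(audio)
    prompt = (
        f"Question: {query}\n"
        f"Audio events: {acd_context}\n"
        f"Expert output ({intent}): {expert_output}\n"
        "Answer:"
    )
    return run_llm(prompt)

print(answer("Who is the speaker in this clip?", b"<audio bytes>"))
```

One design point the abstract implies: because the experts all emit text, the LLM only ever reasons in the text domain, which is what makes a model as small as 3.8B parameters viable for on-device deployment.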