Advancing Speech Language Models by Scaling Supervised Fine-Tuning with Over 60,000 Hours of Synthetic Speech Dialogue Data
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: GPT-4o represents a significant milestone in enabling real-time
interaction with large language models (LLMs) through speech; its remarkably
low latency and high fluency not only capture attention but also stimulate
research interest in the field. Real-time speech interaction is particularly
valuable in scenarios requiring rapid feedback and immediate responses, and it
dramatically enhances user experience. However, there is a notable lack of
research focused on real-time large speech language models, particularly for
Chinese. In this work, we present KE-Omni, a seamless large speech language
model built upon Ke-SpeechChat, a large-scale, high-quality synthetic speech
interaction dataset consisting of 7 million Chinese and English conversations,
featuring 42,002 speakers and totaling over 60,000 hours. This contributes
significantly to the advancement of research and development in this field.
The demos can be accessed at \url{https://huggingface.co/spaces/KE-Team/KE-Omni}.
DOI: 10.48550/arxiv.2412.01078