SpeechLMScore: Evaluating speech generation using speech language model
Format: Article
Language: English
Abstract: While human evaluation is the most reliable metric for evaluating speech
generation systems, it is generally costly and time-consuming. Previous studies
on automatic speech quality assessment address the problem by predicting human
evaluation scores with machine learning models. However, they rely on
supervised learning and thus suffer from high annotation costs and domain-shift
problems. We propose SpeechLMScore, an unsupervised metric to evaluate
generated speech using a speech language model. SpeechLMScore maps a speech
signal into a sequence of discrete tokens and computes the average
log-probability of generating that token sequence.
Therefore, it does not require human annotation and is a highly scalable
framework. Evaluation results demonstrate that the proposed metric shows a
promising correlation with human evaluation scores on different speech
generation tasks including voice conversion, text-to-speech, and speech
enhancement.
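The score described in the abstract reduces to an average next-token log-likelihood, SpeechLMScore = (1/T) Σ_t log p(d_t | d_<t), over the discrete unit sequence. Below is a minimal sketch of that computation in PyTorch; the speech tokenizer and the `unit_lm` module (an autoregressive language model over speech units returning next-token logits) are hypothetical stand-ins under stated assumptions, not the paper's actual components.

```python
import torch
import torch.nn.functional as F

def speechlm_score(unit_lm: torch.nn.Module, tokens: torch.Tensor) -> float:
    """Average log-probability of a discrete speech-token sequence.

    tokens:  1-D LongTensor of units d_1..d_T (output of a hypothetical
             speech tokenizer, e.g. k-means over self-supervised features).
    unit_lm: assumed autoregressive LM mapping a (1, T-1) token batch to
             next-token logits of shape (1, T-1, vocab_size).
    """
    with torch.no_grad():
        inputs = tokens[:-1].unsqueeze(0)              # condition on d_1..d_{T-1}
        targets = tokens[1:].unsqueeze(0)              # predict d_2..d_T
        logp = F.log_softmax(unit_lm(inputs), dim=-1)  # (1, T-1, vocab_size)
        token_logp = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        return token_logp.mean().item()                # higher = more probable
```

A higher (less negative) score means the language model finds the unit sequence more probable; per the abstract, this correlates with human evaluation scores across voice conversion, text-to-speech, and speech enhancement.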
DOI: 10.48550/arxiv.2212.04559