Audio style unification method based on generative adversarial network


Detailed description

Bibliographic details
Main authors: YANG ZHIJUN, XIE HUILONG, OUYANG TONGJIE, HU TIANLIN
Format: Patent
Language: Chinese; English
Description
Abstract: The invention discloses an audio style unification method based on a generative adversarial network. The method comprises the following steps: step 1, acquiring an initial data set and a noise data set; step 2, preprocessing the initial data set and the noise data set, generating a noise-mixed audio and a style template audio, and determining a training data set and a test data set from the noise-mixed audio and the style template audio; step 3, building a generative network model and training a generator network G for unifying audio styles, which takes the noise-mixed audio and the style template audio as input and outputs an audio of the target style together with its frequency spectrum; step 4, building a discrimination network model and training a discriminator network D to measure the similarity between the spectrum of the target style output by the generator and the spectrum of the style template; and step 5, constructing a loss function model and training the generative adversarial network …
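The patent does not specify how the noise-mixed audio of step 2 is generated. A common preprocessing approach is to add scaled noise to clean audio at a chosen signal-to-noise ratio; the sketch below (the helper `mix_at_snr` and the SNR-based gain formula are assumptions, not taken from the patent) illustrates one such scheme in NumPy.

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Hypothetical step-2 helper: scale `noise` so that the mixture
    clean + gain * noise has the requested signal-to-noise ratio in dB."""
    noise = noise[: len(clean)]
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # Choose gain so that p_clean / (gain^2 * p_noise) == 10^(snr_db / 10)
    gain = np.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
    return clean + gain * noise

# Example: mix a 440 Hz tone with white noise at 10 dB SNR
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
clean = np.sin(2 * np.pi * 440.0 * t)
noise = rng.standard_normal(16000)
mixed = mix_at_snr(clean, noise, snr_db=10.0)
```

In practice the mixed waveform would then be converted to a spectrogram (e.g. an STFT magnitude) before being fed to the generator network G.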
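Step 5 constructs a loss function over the generator G and discriminator D, but the abstract does not give its exact form. A standard choice for this kind of spectrum-matching GAN is a binary cross-entropy adversarial loss for D, plus an L1 spectral reconstruction term for G; the functions and the weighting factor `lam` below are illustrative assumptions, not the patent's actual loss.

```python
import numpy as np

def bce(probs: np.ndarray, targets: np.ndarray, eps: float = 1e-12) -> float:
    """Binary cross-entropy between discriminator outputs and 0/1 targets."""
    probs = np.clip(probs, eps, 1.0 - eps)
    return float(-np.mean(targets * np.log(probs) + (1 - targets) * np.log(1 - probs)))

def discriminator_loss(d_real: np.ndarray, d_fake: np.ndarray) -> float:
    # D should score style-template spectra as 1 and generated spectra as 0
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake: np.ndarray, fake_spec: np.ndarray,
                   template_spec: np.ndarray, lam: float = 10.0) -> float:
    # Adversarial term (G tries to make D output 1) plus an assumed
    # L1 spectral term pulling the generated spectrum toward the template
    adv = bce(d_fake, np.ones_like(d_fake))
    recon = float(np.mean(np.abs(fake_spec - template_spec)))
    return adv + lam * recon
```

During training the two losses would be minimized alternately: D's loss on a batch of real and generated spectra, then G's loss through D's scores on fresh generated spectra.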