CatVRNN: Generating category texts via multi-task learning

Bibliographic Details
Published in: Knowledge-Based Systems, 2022-05, Vol. 244, p. 108491, Article 108491
Main Authors: Cheng, Pengsen; Dai, Jinqiao; Liu, Jiayong
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Controlling a model to generate texts of different categories is a challenging task that is receiving increasing attention. Recently, generative adversarial networks (GANs) have shown promising results for category text generation. However, the texts generated by GANs usually suffer from mode collapse and training instability. To avoid these problems, this study proposes a novel model called the category-aware variational recurrent neural network (CatVRNN), inspired by multi-task learning. In this model, generation and classification tasks are trained simultaneously to generate texts of different categories. Multi-task learning can improve the quality of the generated texts when the classification task is chosen appropriately. In addition, a function is proposed to initialize the hidden state of CatVRNN to force the model to generate texts of a specific category. Experimental results on three datasets demonstrate that the model can outperform state-of-the-art GAN-based text generation methods in terms of the diversity of the generated texts.
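
The abstract points to two mechanisms: joint training of a generation task with an auxiliary classification task, and category-conditioned initialization of the recurrent hidden state. The following is a minimal sketch of that general idea, not the paper's implementation: it substitutes a plain GRU for the variational recurrent cell, and all module names, loss weights, and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

class CategoryAwareGenerator(nn.Module):
    """Recurrent generator with a category-initialized hidden state and two task heads."""
    def __init__(self, vocab_size, num_categories, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        # Category embedding used to initialize the hidden state, so decoding
        # is conditioned on the desired category from the first step onward.
        self.cat_to_hidden = nn.Embedding(num_categories, hidden_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.gen_head = nn.Linear(hidden_dim, vocab_size)      # next-token prediction
        self.cls_head = nn.Linear(hidden_dim, num_categories)  # auxiliary classification

    def forward(self, tokens, categories):
        # tokens: (batch, seq_len) token ids; categories: (batch,) category ids
        h0 = torch.tanh(self.cat_to_hidden(categories)).unsqueeze(0)   # (1, batch, hidden)
        out, h_last = self.rnn(self.token_embed(tokens), h0)
        gen_logits = self.gen_head(out)                # (batch, seq_len, vocab)
        cls_logits = self.cls_head(h_last.squeeze(0))  # (batch, num_categories)
        return gen_logits, cls_logits

def train_step(model, optimizer, tokens, categories, cls_weight=0.5):
    # Multi-task objective: language-modeling loss plus a weighted classification
    # loss; the actual weighting and scheduling in CatVRNN may differ.
    gen_logits, cls_logits = model(tokens[:, :-1], categories)
    gen_loss = nn.functional.cross_entropy(
        gen_logits.reshape(-1, gen_logits.size(-1)), tokens[:, 1:].reshape(-1))
    cls_loss = nn.functional.cross_entropy(cls_logits, categories)
    loss = gen_loss + cls_weight * cls_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Initializing the hidden state from a category embedding lets the desired category condition every decoding step, while the auxiliary classification loss encourages the shared recurrent representation to remain discriminative with respect to categories.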
ISSN: 0950-7051, 1872-7409
DOI: 10.1016/j.knosys.2022.108491