Research on automatic proofreading of Chinese text based on Transformer model
Saved in:
Published in: Diànzǐ jìshù yīngyòng, 2020-01, Vol. 46 (1), pp. 30-33
Main authors: , , ,
Format: Article
Language: Chinese (chi)
Subjects:
Online access: Full text
Abstract: This paper proposes applying the Transformer model to automatic proofreading of Chinese text. The Transformer differs from traditional approaches based on probability, statistics, or rules, and from Seq2Seq models built on BiLSTM; this deep learning model improves the overall structure of the Seq2Seq framework to perform automatic proofreading of Chinese text. Different models were compared on public data sets, with accuracy, recall, and F1 score as the evaluation metrics; the experimental results show that the Transformer model greatly improves proofreading performance compared with the other models.
ISSN: 0258-7998
DOI: 10.16157/j.issn.0258-7998.191013
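The abstract describes an encoder-decoder (Seq2Seq) setup in which the Transformer replaces recurrent BiLSTM components and maps possibly erroneous Chinese text to corrected text. The record does not include the authors' code; the following is only a minimal, illustrative sketch of such a character-level Transformer corrector using PyTorch's nn.Transformer. The vocabulary size, model dimensions, and toy tensors are assumptions for demonstration, not values from the paper.

```python
# Minimal sketch of a Transformer-based Seq2Seq corrector for Chinese text.
# This is NOT the authors' implementation: vocabulary size, hyperparameters,
# and the toy batch below are illustrative assumptions only.
import torch
import torch.nn as nn


class TransformerProofreader(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256,
                 nhead: int = 4, num_layers: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Positional encodings are omitted for brevity; a real model would
        # add them to the character embeddings before the Transformer.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids: torch.Tensor, tgt_ids: torch.Tensor) -> torch.Tensor:
        # Causal mask so the decoder cannot attend to future target characters.
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        hidden = self.transformer(self.embed(src_ids), self.embed(tgt_ids),
                                  tgt_mask=tgt_mask)
        return self.out(hidden)  # per-position logits over the character vocabulary


# Toy usage: a batch of 2 sequences of 10 character ids, vocabulary of 5,000.
model = TransformerProofreader(vocab_size=5000)
src = torch.randint(0, 5000, (2, 10))   # possibly erroneous input characters
tgt = torch.randint(0, 5000, (2, 10))   # corrected reference (teacher forcing)
logits = model(src, tgt)
print(logits.shape)                      # torch.Size([2, 10, 5000])
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5000), tgt.reshape(-1))
```

For evaluation of the kind the abstract mentions, predicted corrections would be compared against reference corrections to obtain precision P and recall R, with the F1 score computed as F1 = 2PR / (P + R).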