Modelling Chinese for text compression


Bibliographic Details
Main authors: Peiliang Wu, Teahan, W.J.
Format: Conference paper
Language: English
Description
Summary: Summary form only given. We have adapted the PPM model specifically for Chinese text and achieve good compression results. We highlight the importance of pre-processing for Chinese: unlike naturally segmented languages such as English, it is not clear which symbols are the most appropriate units for encoding. We have developed a text compression corpus for Chinese, and our experiments with this corpus show that pre-processing can improve the compression rate significantly. We made several changes to the PPM model to adapt it specifically to the Chinese language. Changing the symbol encoding unit to 16 bits captures the structure of the language precisely. Sorting the characters in each context by frequency improves the program's speed significantly, and using no exclusions also leads to faster execution. This new PPM-Ch model should achieve similar improvements for other languages with large alphabets, such as Japanese, Korean and Thai.
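
To illustrate the two ideas mentioned in the abstract, 16-bit symbol units and frequency-ordered context counts, the following Python sketch builds a simple order-2 context model over Chinese text. This is not the authors' PPM-Ch implementation; the use of UTF-16 code units as the 16-bit encoding, the order-2 context length, and the sample text are assumptions made here purely for illustration.

    # Minimal sketch (assumed details, not the PPM-Ch code from the paper):
    # treat Chinese text as 16-bit symbols rather than bytes, and keep each
    # context's candidate symbols in frequency order so the most likely
    # prediction is examined first.
    from collections import Counter, defaultdict

    def to_16bit_symbols(text: str) -> list[int]:
        """Map text to 16-bit symbols (UTF-16LE code units), one per CJK character."""
        data = text.encode("utf-16-le")
        return [int.from_bytes(data[i:i + 2], "little") for i in range(0, len(data), 2)]

    def build_context_model(symbols: list[int], order: int = 2) -> dict:
        """Count which symbols follow each length-`order` context."""
        model: dict[tuple[int, ...], Counter] = defaultdict(Counter)
        for i in range(order, len(symbols)):
            context = tuple(symbols[i - order:i])
            model[context][symbols[i]] += 1
        return model

    text = "数据压缩是信息论的经典课题。数据压缩可以节省存储空间。"
    symbols = to_16bit_symbols(text)
    model = build_context_model(symbols)

    # Rank each context's successors by frequency (most common first).
    for context, counts in model.items():
        ranked = counts.most_common()
        print([chr(c) for c in context], "->", [(chr(s), n) for s, n in ranked])

A full PPM coder would combine such context counts across orders with an escape mechanism and feed the resulting probabilities to an arithmetic coder; the sketch only shows the symbol unit and the frequency ordering.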
ISSN: 1068-0314, 2375-0359
DOI: 10.1109/DCC.2005.54