Estimating and comparing entropies across written natural languages using PPM compression


Bibliographic details
Authors: Behr, F., Fossum, V., Mitzenmacher, M., Xiao, D.
Format: Conference paper
Language: English
Online access: order full text
Description
Summary: Summary form only given. The measurement of the entropy of written English is extended to include the following written natural languages: Arabic, Chinese, French, Japanese, Korean, Russian, and Spanish. It was observed that translations of the same document have approximately the same size when compressed, even though their uncompressed sizes vary widely. The experiments used the PPMD+, PPMZ, and BZIP2 compression algorithms to compress the given texts and compare the resulting sizes. Similar experiments were also performed with machine translations. The findings suggest that compression can be used as a tool to detect poor translations. The results of these experiments, while preliminary, support the hypothesis that translation preserves information content. This analysis opens new directions for future research on the relationship between compression and translation.
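The size-comparison idea from the abstract can be sketched with a few lines of code. The following is a minimal illustration, not the paper's actual pipeline: it uses Python's standard-library `bz2` (one of the compressors the abstract names) and tiny made-up sample strings in place of the paper's parallel corpora. The compressed size in bits, divided by the number of source characters, gives a rough upper bound on entropy per character.

```python
import bz2

def compressed_size(text: str) -> int:
    """BZIP2-compressed size of the text, in bytes."""
    return len(bz2.compress(text.encode("utf-8")))

def bits_per_char(text: str) -> float:
    """Rough entropy estimate: compressed bits per source character."""
    return 8 * compressed_size(text) / len(text)

# Illustrative parallel texts (assumed samples, not the paper's corpus).
english = "To be, or not to be, that is the question. " * 50
spanish = "Ser o no ser, esa es la cuestion. " * 50

# Translations of the same content should compress to similar sizes,
# even when their uncompressed lengths differ.
ratio = compressed_size(english) / compressed_size(spanish)
```

Under the paper's hypothesis, `ratio` should stay close to 1 for faithful translations, and a translation whose compressed size deviates sharply from its source's is a candidate poor translation.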
ISSN: 1068-0314, 2375-0359
DOI: 10.1109/DCC.2003.1194035