Making compression algorithms for Unicode text
Main author:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: The majority of online content is written in languages other than English, and is most commonly encoded in UTF-8, the world's dominant Unicode character encoding. Traditional compression algorithms typically operate on individual bytes. While this approach works well for the single-byte ASCII encoding, it works poorly for UTF-8, where characters often span multiple bytes. Previous research has focused on developing Unicode compressors from scratch, which often failed to outperform established algorithms such as bzip2. We develop a technique to modify byte-based compressors to operate directly on Unicode characters, and implement variants of LZW and PPM that apply this technique. We find that our method substantially improves compression effectiveness on a UTF-8 corpus, with our PPM variant outperforming the state-of-the-art PPMII compressor. On ASCII and binary files, our variants perform similarly to the original unmodified compressors.
DOI: 10.48550/arxiv.1701.04047
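
The abstract describes adapting byte-oriented compressors such as LZW to consume whole Unicode code points rather than raw bytes. The sketch below is a minimal, hypothetical Python illustration of that idea, not the paper's implementation: the function name `lzw_compress_codepoints` and the shortcut of seeding the dictionary from the input's own alphabet are assumptions made here for brevity. A real codec would need a shared initial alphabet or an escape mechanism for unseen code points, and the paper's technique also covers PPM.

```python
# Minimal sketch (an assumption, not the paper's implementation) of LZW
# adapted to operate on Unicode code points instead of raw bytes.

def lzw_compress_codepoints(text: str) -> list[int]:
    """Compress text with LZW, treating each code point as one symbol.

    Illustrative shortcut: the initial dictionary is seeded from the
    input's own alphabet; a real codec would need a shared initial
    alphabet or an escape mechanism for previously unseen code points.
    """
    alphabet = sorted(set(text))
    table = {ch: i for i, ch in enumerate(alphabet)}
    next_code = len(table)

    output: list[int] = []
    phrase = ""
    for ch in text:  # iterating a str yields code points, not bytes
        candidate = phrase + ch
        if candidate in table:
            phrase = candidate            # extend the current match
        else:
            output.append(table[phrase])  # emit longest known phrase
            table[candidate] = next_code  # learn the extended phrase
            next_code += 1
            phrase = ch
    if phrase:
        output.append(table[phrase])
    return output

# "é" is one dictionary symbol here; a byte-based LZW would see its
# two UTF-8 bytes 0xC3 0xA9 as separate symbols.
print(lzw_compress_codepoints("héllo héllo héllo"))
```

Because Python strings iterate over code points, a multi-byte character such as "é" enters the dictionary as a single symbol, which is the property the abstract credits for the improved compression on UTF-8 text.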