Toucan: Token-Aware Character Level Language Modeling
Main author(s): ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Character-level language models obviate the need for separately trained tokenizers, but their efficiency suffers from longer sequence lengths. Learning to combine character representations into tokens has made training these models more efficient, but they still require decoding characters individually. We propose Toucan, an augmentation to character-level models to make them "token-aware". Comparing our method to prior work, we demonstrate significant speed-ups in character generation without a loss in language modeling performance. We then explore differences between our learned dynamic tokenization of character sequences and popular fixed-vocabulary schemes such as Byte-Pair Encoding and WordPiece, finding that our approach yields a greater number of longer sequences tokenized as single items. Our project and code are available at https://nlp.jhu.edu/nuggets/.
DOI: 10.48550/arxiv.2311.08620