fugashi, a Tool for Tokenizing Japanese in Python
Format: Article
Language: English
Abstract: Recent years have seen an increase in the number of large-scale multilingual NLP projects. However, even in such projects, languages with special processing requirements are often excluded. One such language is Japanese: it is written without spaces, so tokenization is non-trivial, and while high-quality open-source tokenizers exist, they can be hard to use and lack English documentation. This paper introduces fugashi, a MeCab wrapper for Python, and gives an introduction to tokenizing Japanese.
DOI: 10.48550/arxiv.2010.06858