SentencePiece
Year: 2018
Journal: Conference on Empirical Methods in Natural Language Processing
Languages: All Languages
Programming languages: C++, Python
Input data:
Plain text
Output data:
Tokens
Project website: https://github.com/google/sentencepiece
SentencePiece is an unsupervised text tokenizer and detokenizer intended mainly for neural network-based text generation systems where the vocabulary size is predetermined prior to training the neural model. SentencePiece implements subword units (e.g., byte-pair encoding (BPE) and the unigram language model) with the extension of training directly from raw sentences. This makes it possible to build a purely end-to-end system that does not depend on language-specific pre/postprocessing.
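A minimal sketch of this workflow using the official Python bindings (installable via pip as "sentencepiece"): a subword model is trained directly from a raw text file and then used to tokenize and detokenize text. The file names, vocabulary size, and example sentence below are illustrative assumptions, not values prescribed by the project.

    # Sketch: train a SentencePiece model from raw text and apply it.
    import sentencepiece as spm

    # Train directly on untokenized sentences (one per line).
    # model_type may be "unigram" (default) or "bpe".
    spm.SentencePieceTrainer.train(
        input="corpus.txt",        # assumed path to raw training text
        model_prefix="spm_model",  # writes spm_model.model and spm_model.vocab
        vocab_size=8000,           # illustrative vocabulary size
        model_type="unigram",
    )

    # Load the trained model and tokenize / detokenize.
    sp = spm.SentencePieceProcessor(model_file="spm_model.model")
    pieces = sp.encode("This is a test.", out_type=str)  # subword pieces
    ids = sp.encode("This is a test.", out_type=int)     # integer ids
    text = sp.decode(ids)                                # detokenized text
    print(pieces, ids, text)

Because the model encodes whitespace explicitly, decoding the ids recovers the original sentence without any language-specific detokenization rules.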