Fast, consistent tokenization of natural language text
This is a package for converting natural language text into tokens. It includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, tweets, Penn Treebank tokens, and regular-expression matches, as well as functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents, each with the same number of words. The tokenizers have a consistent interface, and the package is built on the stringi and Rcpp packages for fast yet correct tokenization in UTF-8 encoding.
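The consistent interface described above can be sketched briefly: each `tokenize_*` function takes a character vector and returns a list of token vectors, and the `count_*` helpers are vectorized the same way. A minimal sketch, assuming the package is installed from CRAN (the function names below are part of the tokenizers API; the sample text is illustrative):

```r
library(tokenizers)

text <- "The quick brown fox jumps over the lazy dog."

# Each tokenizer takes a character vector and returns a list of
# character vectors, one element per input document.
tokenize_words(text)
tokenize_ngrams(text, n = 2)
tokenize_sentences(text)

# Counting helpers share the same vectorized interface and
# return one integer per input document.
count_words(text)
```

Because every tokenizer accepts and returns the same shapes, they can be swapped into a text-processing pipeline without changing the surrounding code.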
System | Target | Derivation | Build status
---|---|---|---
x86_64-linux | | /gnu/store/lsbkg7j3y5kbbxfap5wcmp7snsi6banz-r-tokenizers-0.2.1.drv | |
mips64el-linux | | /gnu/store/49qya2q1v7566lckq1pv3wfk75fxcjdp-r-tokenizers-0.2.1.drv | |
i686-linux | | /gnu/store/w2r43lds079l9x8hbvh2239ad1b204lr-r-tokenizers-0.2.1.drv | |
i586-gnu | | /gnu/store/bfci0fgck3asqdyff3pz4cdc6g77v1am-r-tokenizers-0.2.1.drv | |
armhf-linux | | /gnu/store/fn097f0xxgyyf976a409f20wg52irg70-r-tokenizers-0.2.1.drv | |
aarch64-linux | | /gnu/store/afaizpc81zjhdxiak2pw9xjz789ndi7i-r-tokenizers-0.2.1.drv | |
Linter | Message | Location
---|---|---
No lint warnings ✓ | |