
tokenizers

Fast, Consistent Tokenization of Natural Language Text


Overview

This R package offers functions with a consistent interface to convert natural language text into tokens. It includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, Penn Treebank, and regular expressions, as well as functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents, each with the same number of words. The package is built on the stringi and Rcpp packages for fast yet correct tokenization in UTF-8.

See the “Introduction to the tokenizers Package” vignette for an overview of all the functions in this package.

This package complies with the standards for input and output recommended by the Text Interchange Formats. The TIF initiative was created at an rOpenSci meeting in 2017, and its recommendations are available as part of the tif package. See the “The Text Interchange Formats and the tokenizers Package” vignette for an explanation of how this package fits into that ecosystem.

Suggested citation

If you use this package for your research, we would appreciate a citation.

citation("tokenizers")
#> 
#> To cite the tokenizers package in publications, please cite the paper
#> in the Journal of Open Source Software:
#> 
#>   Lincoln A. Mullen et al., "Fast, Consistent Tokenization of Natural
#>   Language Text," Journal of Open Source Software 3, no. 23 (2018):
#>   655, https://doi.org/10.21105/joss.00655.
#> 
#> A BibTeX entry for LaTeX users is
#> 
#>   @Article{,
#>     title = {Fast, Consistent Tokenization of Natural Language Text},
#>     author = {Lincoln A. Mullen and Kenneth Benoit and Os Keyes and Dmitry Selivanov and Jeffrey Arnold},
#>     journal = {Journal of Open Source Software},
#>     year = {2018},
#>     volume = {3},
#>     issue = {23},
#>     pages = {655},
#>     url = {https://doi.org/10.21105/joss.00655},
#>     doi = {10.21105/joss.00655},
#>   }

Examples

The tokenizers in this package have a consistent interface. They all take either a character vector of any length, or a list where each element is a character vector of length one, or a data.frame that adheres to the tif corpus format. The idea is that each element (or row) comprises a text. Then each function returns a list with the same length as the input vector, where each element in the list contains the tokens generated by the function. If the input character vector or list is named, then the names are preserved, so that the names can serve as identifiers. For a tif-formatted data.frame, the doc_id field is used as the element names in the returned token list.

library(magrittr)
library(tokenizers)

james <- paste0(
  "The question thus becomes a verbal one\n",
  "again; and our knowledge of all these early stages of thought and feeling\n",
  "is in any case so conjectural and imperfect that farther discussion would\n",
  "not be worth while.\n",
  "\n",
  "Religion, therefore, as I now ask you arbitrarily to take it, shall mean\n",
  "for us _the feelings, acts, and experiences of individual men in their\n",
  "solitude, so far as they apprehend themselves to stand in relation to\n",
  "whatever they may consider the divine_. Since the relation may be either\n",
  "moral, physical, or ritual, it is evident that out of religion in the\n",
  "sense in which we take it, theologies, philosophies, and ecclesiastical\n",
  "organizations may secondarily grow.\n"
)
names(james) <- "varieties"

tokenize_characters(james)[[1]] %>% head(50)
#>  [1] "t" "h" "e" "q" "u" "e" "s" "t" "i" "o" "n" "t" "h" "u" "s" "b" "e" "c" "o"
#> [20] "m" "e" "s" "a" "v" "e" "r" "b" "a" "l" "o" "n" "e" "a" "g" "a" "i" "n" "a"
#> [39] "n" "d" "o" "u" "r" "k" "n" "o" "w" "l" "e" "d"
tokenize_character_shingles(james)[[1]] %>% head(20)
#>  [1] "the" "heq" "equ" "que" "ues" "est" "sti" "tio" "ion" "ont" "nth" "thu"
#> [13] "hus" "usb" "sbe" "bec" "eco" "com" "ome" "mes"
tokenize_words(james)[[1]] %>% head(10)
#>  [1] "the"      "question" "thus"     "becomes"  "a"        "verbal"  
#>  [7] "one"      "again"    "and"      "our"
tokenize_word_stems(james)[[1]] %>% head(10)
#>  [1] "the"      "question" "thus"     "becom"    "a"        "verbal"  
#>  [7] "one"      "again"    "and"      "our"
tokenize_sentences(james) 
#> $varieties
#> [1] "The question thus becomes a verbal one again; and our knowledge of all these early stages of thought and feeling is in any case so conjectural and imperfect that farther discussion would not be worth while."                                               
#> [2] "Religion, therefore, as I now ask you arbitrarily to take it, shall mean for us _the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine_."
#> [3] "Since the relation may be either moral, physical, or ritual, it is evident that out of religion in the sense in which we take it, theologies, philosophies, and ecclesiastical organizations may secondarily grow."
tokenize_paragraphs(james)
#> $varieties
#> [1] "The question thus becomes a verbal one again; and our knowledge of all these early stages of thought and feeling is in any case so conjectural and imperfect that farther discussion would not be worth while."                                                                                                                                                                                                                                                                   
#> [2] "Religion, therefore, as I now ask you arbitrarily to take it, shall mean for us _the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine_. Since the relation may be either moral, physical, or ritual, it is evident that out of religion in the sense in which we take it, theologies, philosophies, and ecclesiastical organizations may secondarily grow. "
tokenize_ngrams(james, n = 5, n_min = 2)[[1]] %>% head(10)
#>  [1] "the question"                   "the question thus"             
#>  [3] "the question thus becomes"      "the question thus becomes a"   
#>  [5] "question thus"                  "question thus becomes"         
#>  [7] "question thus becomes a"        "question thus becomes a verbal"
#>  [9] "thus becomes"                   "thus becomes a"
tokenize_skip_ngrams(james, n = 5, k = 2)[[1]] %>% head(10)
#>  [1] "the"                  "the question"         "the thus"            
#>  [4] "the becomes"          "the question thus"    "the question becomes"
#>  [7] "the question a"       "the thus becomes"     "the thus a"          
#> [10] "the thus verbal"
tokenize_ptb(james)[[1]] %>% head(10)
#>  [1] "The"      "question" "thus"     "becomes"  "a"        "verbal"  
#>  [7] "one"      "again"    ";"        "and"
tokenize_lines(james)[[1]] %>% head(5)
#> [1] "The question thus becomes a verbal one"                                   
#> [2] "again; and our knowledge of all these early stages of thought and feeling"
#> [3] "is in any case so conjectural and imperfect that farther discussion would"
#> [4] "not be worth while."                                                      
#> [5] "Religion, therefore, as I now ask you arbitrarily to take it, shall mean"

The package also contains functions to count words, characters, and sentences, and these functions follow the same consistent interface.

count_words(james)
#> varieties 
#>       112
count_characters(james)
#> varieties 
#>       673
count_sentences(james)
#> varieties 
#>        13

The chunk_text() function splits a document into smaller chunks, each with the same number of words.
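
A minimal sketch of its use (not part of the original README output): split the example text into chunks of roughly 40 words each and count the words in each chunk.

chunks <- chunk_text(james, chunk_size = 40)
length(chunks)       # number of chunks produced
count_words(chunks)  # each chunk holds chunk_size words, except possibly the last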

Contributing

Contributions to the package are more than welcome. One way that you can help is by using this package in your R package for natural language processing. If you want to contribute a tokenization function to this package, it should follow the same conventions as the rest of the functions whenever it makes sense to do so.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.



More Repositories

1. drake (R, 1,329 stars): An R-focused pipeline toolkit for reproducibility and high-performance computing
2. skimr (HTML, 1,095 stars): A frictionless, pipeable approach to dealing with summary statistics
3. targets (R, 854 stars): Function-oriented Make-like declarative workflows for R
4. rtweet (R, 785 stars): 🐦 R client for interacting with Twitter's [stream and REST] APIs
5. tabulizer (R, 518 stars): Bindings for Tabula PDF Table Extractor Library
6. pdftools (C++, 489 stars): Text Extraction, Rendering and Converting of PDF Documents
7. magick (R, 440 stars): Magic, madness, heaven, sin
8. visdat (R, 439 stars): Preliminary Exploratory Visualisation of Data
9. stplanr (R, 412 stars): Sustainable transport planning with R
10. RSelenium (R, 332 stars): An R client for Selenium Remote WebDriver
11. rnoaa (R, 320 stars): R interface to many NOAA data APIs
12. osmdata (C++, 307 stars): R package for downloading OpenStreetMap data
13. charlatan (R, 283 stars): Create fake data in R
14. software-review (R, 279 stars): rOpenSci Software Peer Review
15. iheatmapr (R, 259 stars): Complex, interactive heatmaps in R
16. taxize (R, 250 stars): A taxonomic toolbelt for R
17. rrrpkg (248 stars): Use of an R package to facilitate reproducible research
18. elastic (R, 244 stars): R client for the Elasticsearch HTTP API
19. tesseract (R, 236 stars): Bindings to Tesseract OCR engine for R
20. git2r (R, 213 stars): R bindings to the libgit2 library
21. qualtRics (R, 212 stars): Download ⬇️ Qualtrics survey data directly into R!
22. biomartr (R, 203 stars): Genomic Data Retrieval with R
23. writexl (C, 202 stars): Portable, light-weight data frame to xlsx exporter for R
24. rnaturalearth (R, 191 stars): An R package to hold and facilitate interaction with natural earth map data 🌍
25. googleLanguageR (HTML, 189 stars): R client for the Google Translation API, Google Cloud Natural Language API and Google Cloud Speech API
26. textreuse (R, 188 stars): Detect text reuse and document similarity
27. rentrez (R, 178 stars): Talk with NCBI entrez using R
28. piggyback (R, 172 stars): 📦 for using large(r) data files on GitHub
29. rcrossref (R, 164 stars): R client for various CrossRef APIs
30. osmextract (R, 158 stars): Download and import OpenStreetMap data from Geofabrik and other providers
31. dataspice (R, 155 stars): 🌶️ Create lightweight schema.org descriptions of your datasets
32. tic (R, 153 stars): Tasks Integrating Continuously: CI-Agnostic Workflow Definitions
33. webchem (R, 149 stars): Chemical Information from the Web
34. geojsonio (R, 148 stars): Convert many data formats to & from GeoJSON & TopoJSON
35. MODIStsp (R, 147 stars): An "R" package for automatic download and preprocessing of MODIS Land Products Time Series
36. rgbif (R, 146 stars): Interface to the Global Biodiversity Information Facility API
37. tsbox (R, 146 stars): Class-Agnostic Time Series in R
38. DataPackageR (R, 145 stars): An R package to enable reproducible data processing, packaging and sharing
39. ghql (R, 141 stars): GraphQL R client
40. dev_guide (R, 141 stars): rOpenSci Packages: Development, Maintenance, and Peer Review
41. jqr (R, 139 stars): R interface to jq
42. osfr (R, 136 stars): R interface to the Open Science Framework (OSF)
43. osmplotr (R, 130 stars): Data visualisation using OpenStreetMap objects
44. opencv (C++, 130 stars): R bindings for OpenCV
45. ssh (C, 126 stars): Native SSH client in R based on libssh
46. tarchetypes (R, 116 stars): Archetypes for targets and pipelines
47. RefManageR (R, 112 stars): R package RefManageR
48. spocc (R, 109 stars): Species occurrence data toolkit for R
49. ezknitr (R, 107 stars): Avoid the typical working directory pain when using 'knitr'
50. hunspell (C++, 106 stars): High-Performance Stemmer, Tokenizer, and Spell Checker for R
51. crul (R, 101 stars): R6 based http client for R (made for developers)
52. gistr (R, 101 stars): Interact with GitHub gists from R
53. spelling (R, 101 stars): Tools for Spell Checking in R
54. rfishbase (R, 100 stars): R interface to the fishbase.org database
55. weathercan (R, 99 stars): R package for downloading weather data from Environment and Climate Change Canada
56. git2rdata (R, 98 stars): An R package for storing and retrieving data.frames in git repositories
57. gutenbergr (R, 97 stars): Search and download public domain texts from Project Gutenberg
58. bib2df (R, 97 stars): Parse a BibTeX file to a tibble
59. ckanr (R, 97 stars): R client for the CKAN API
60. rsvg (C, 95 stars): SVG renderer for R based on librsvg2
61. UCSCXenaTools (R, 95 stars): 📦 An R package for accessing genomics data from UCSC Xena platform, from cancer multi-omics to single-cell RNA-seq https://cran.r-project.org/web/packages/UCSCXenaTools/
62. EML (R, 94 stars): Ecological Metadata Language interface for R: synthesis and integration of heterogenous data
63. nasapower (R, 93 stars): API Client for NASA POWER Global Meteorology, Surface Solar Energy and Climatology in R
64. cyphr (R, 91 stars): :shipit: Humane encryption
65. FedData (R, 91 stars): Functions to Automate Downloading Geospatial Data Available from Several Federated Data Sources
66. av (C, 88 stars): Working with Video in R
67. mapscanner (R, 87 stars): R package to print maps, draw on them, and scan them back in
68. opencage (R, 86 stars): 🌐 R package for the OpenCage API -- both forward and reverse geocoding 🌐
69. tidync (R, 85 stars): NetCDF exploration and data extraction
70. GSODR (R, 84 stars): API Client for Global Surface Summary of the Day ('GSOD') Weather Data Client in R
71. rzmq (C++, 82 stars): R package for ZMQ
72. gittargets (R, 80 stars): Data version control for reproducible analysis pipelines in R with {targets}
73. bikedata (R, 79 stars): 🚲 Extract data from public hire bicycle systems
74. historydata (R, 78 stars): Datasets for Historians
75. dittodb (R, 78 stars): A Test Environment for DB Queries in R
76. arkdb (R, 78 stars): Archive and unarchive databases as flat text files
77. fingertipsR (R, 78 stars): R package to interact with Public Health England's Fingertips data tool
78. openalexR (R, 78 stars): Getting bibliographic records from OpenAlex
79. vcr (R, 77 stars): Record HTTP calls and replay them
80. rebird (R, 77 stars): Wrapper to the eBird API
81. smapr (R, 77 stars): An R package for acquisition and processing of NASA SMAP data
82. nodbi (R, 75 stars): Document DBI connector for R
83. CoordinateCleaner (HTML, 74 stars): Automated flagging of common spatial and temporal errors in biological and palaeontological collection data, for use in conservation, ecology and palaeontology
84. opentripplanner (R, 73 stars): An R package to set up and use OpenTripPlanner (OTP) as a local or remote multimodal trip planner
85. nlrx (R, 71 stars): nlrx NetLogo R
86. rb3 (R, 69 stars): A bunch of downloaders and parsers for data delivered from B3
87. tidyhydat (R, 69 stars): An R package to import Water Survey of Canada hydrometric data and make it tidy
88. robotstxt (R, 68 stars): robots.txt file parsing and checking for R
89. slopes (R, 65 stars): Package to calculate slopes of roads, rivers and trajectories
90. tradestatistics (R, 65 stars): R package to access Open Trade Statistics API
91. terrainr (R, 64 stars): Get DEMs and orthoimagery from the USGS National Map, georeference your images and merge rasters, and visualize with Unity 3D
92. unconf17 (JavaScript, 64 stars): Website for 2017 rOpenSci Unconf
93. NLMR (R, 63 stars): 📦 R package to simulate neutral landscape models
94. roadoi (R, 63 stars): Use Unpaywall with R
95. parzer (R, 63 stars): Parse geographic coordinates
96. tiler (R, 63 stars): Generate geographic and non-geographic map tiles from R
97. rWBclimate (R, 62 stars): R interface for the World Bank climate data
98. codemetar (R, 62 stars): An R package for generating and working with codemeta
99. comtradr (R, 60 stars): Functions for Interacting with the UN Comtrade API
100. aRxiv (R, 58 stars): Programmatic interface to the Arxiv API