
tokenizers

Fast, Consistent Tokenization of Natural Language Text


Overview

This R package offers functions with a consistent interface to convert natural language text into tokens. It includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, Penn Treebank, and regular expressions, as well as functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents, each with the same number of words. The package is built on the stringi and Rcpp packages for fast yet correct tokenization in UTF-8.

See the "Introduction to the tokenizers Package" vignette for an overview of all the functions in this package.

This package complies with the standards for input and output recommended by the Text Interchange Formats (TIF). The TIF initiative was created at an rOpenSci meeting in 2017, and its recommendations are available as part of the tif package. See the "Text Interchange Formats and the tokenizers Package" vignette for an explanation of how this package fits into that ecosystem.

Suggested citation

If you use this package for your research, we would appreciate a citation.

citation("tokenizers")
#> 
#> To cite the tokenizers package in publications, please cite the paper
#> in the Journal of Open Source Software:
#> 
#>   Lincoln A. Mullen et al., "Fast, Consistent Tokenization of Natural
#>   Language Text," Journal of Open Source Software 3, no. 23 (2018):
#>   655, https://doi.org/10.21105/joss.00655.
#> 
#> A BibTeX entry for LaTeX users is
#> 
#>   @Article{,
#>     title = {Fast, Consistent Tokenization of Natural Language Text},
#>     author = {Lincoln A. Mullen and Kenneth Benoit and Os Keyes and Dmitry Selivanov and Jeffrey Arnold},
#>     journal = {Journal of Open Source Software},
#>     year = {2018},
#>     volume = {3},
#>     issue = {23},
#>     pages = {655},
#>     url = {https://doi.org/10.21105/joss.00655},
#>     doi = {10.21105/joss.00655},
#>   }

Examples

The tokenizers in this package have a consistent interface. They all take either a character vector of any length, or a list where each element is a character vector of length one, or a data.frame that adheres to the tif corpus format. The idea is that each element (or row) comprises a text. Then each function returns a list with the same length as the input vector, where each element in the list contains the tokens generated by the function. If the input character vector or list is named, then the names are preserved, so that the names can serve as identifiers. For a tif-formatted data.frame, the doc_id field is used as the element names in the returned token list.
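As a sketch of the data.frame path described above (assuming the two-column doc_id/text layout that the tif corpus format prescribes; the documents and identifiers here are made up for illustration):

```r
library(tokenizers)

# A minimal corpus in tif format: a data.frame with doc_id and text columns
corpus <- data.frame(
  doc_id = c("doc1", "doc2"),
  text = c("One fish, two fish.", "Red fish, blue fish."),
  stringsAsFactors = FALSE
)

tokens <- tokenize_words(corpus)
names(tokens)  # the doc_id values serve as the identifiers of the token lists
```

The same call works unchanged on a named character vector, which is what the examples below use.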

library(magrittr)
library(tokenizers)

james <- paste0(
  "The question thus becomes a verbal one\n",
  "again; and our knowledge of all these early stages of thought and feeling\n",
  "is in any case so conjectural and imperfect that farther discussion would\n",
  "not be worth while.\n",
  "\n",
  "Religion, therefore, as I now ask you arbitrarily to take it, shall mean\n",
  "for us _the feelings, acts, and experiences of individual men in their\n",
  "solitude, so far as they apprehend themselves to stand in relation to\n",
  "whatever they may consider the divine_. Since the relation may be either\n",
  "moral, physical, or ritual, it is evident that out of religion in the\n",
  "sense in which we take it, theologies, philosophies, and ecclesiastical\n",
  "organizations may secondarily grow.\n"
)
names(james) <- "varieties"

tokenize_characters(james)[[1]] %>% head(50)
#>  [1] "t" "h" "e" "q" "u" "e" "s" "t" "i" "o" "n" "t" "h" "u" "s" "b" "e" "c" "o"
#> [20] "m" "e" "s" "a" "v" "e" "r" "b" "a" "l" "o" "n" "e" "a" "g" "a" "i" "n" "a"
#> [39] "n" "d" "o" "u" "r" "k" "n" "o" "w" "l" "e" "d"
tokenize_character_shingles(james)[[1]] %>% head(20)
#>  [1] "the" "heq" "equ" "que" "ues" "est" "sti" "tio" "ion" "ont" "nth" "thu"
#> [13] "hus" "usb" "sbe" "bec" "eco" "com" "ome" "mes"
tokenize_words(james)[[1]] %>% head(10)
#>  [1] "the"      "question" "thus"     "becomes"  "a"        "verbal"  
#>  [7] "one"      "again"    "and"      "our"
tokenize_word_stems(james)[[1]] %>% head(10)
#>  [1] "the"      "question" "thus"     "becom"    "a"        "verbal"  
#>  [7] "one"      "again"    "and"      "our"
tokenize_sentences(james) 
#> $varieties
#> [1] "The question thus becomes a verbal one again; and our knowledge of all these early stages of thought and feeling is in any case so conjectural and imperfect that farther discussion would not be worth while."                                               
#> [2] "Religion, therefore, as I now ask you arbitrarily to take it, shall mean for us _the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine_."
#> [3] "Since the relation may be either moral, physical, or ritual, it is evident that out of religion in the sense in which we take it, theologies, philosophies, and ecclesiastical organizations may secondarily grow."
tokenize_paragraphs(james)
#> $varieties
#> [1] "The question thus becomes a verbal one again; and our knowledge of all these early stages of thought and feeling is in any case so conjectural and imperfect that farther discussion would not be worth while."                                                                                                                                                                                                                                                                   
#> [2] "Religion, therefore, as I now ask you arbitrarily to take it, shall mean for us _the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine_. Since the relation may be either moral, physical, or ritual, it is evident that out of religion in the sense in which we take it, theologies, philosophies, and ecclesiastical organizations may secondarily grow. "
tokenize_ngrams(james, n = 5, n_min = 2)[[1]] %>% head(10)
#>  [1] "the question"                   "the question thus"             
#>  [3] "the question thus becomes"      "the question thus becomes a"   
#>  [5] "question thus"                  "question thus becomes"         
#>  [7] "question thus becomes a"        "question thus becomes a verbal"
#>  [9] "thus becomes"                   "thus becomes a"
tokenize_skip_ngrams(james, n = 5, k = 2)[[1]] %>% head(10)
#>  [1] "the"                  "the question"         "the thus"            
#>  [4] "the becomes"          "the question thus"    "the question becomes"
#>  [7] "the question a"       "the thus becomes"     "the thus a"          
#> [10] "the thus verbal"
tokenize_ptb(james)[[1]] %>% head(10)
#>  [1] "The"      "question" "thus"     "becomes"  "a"        "verbal"  
#>  [7] "one"      "again"    ";"        "and"
tokenize_lines(james)[[1]] %>% head(5)
#> [1] "The question thus becomes a verbal one"                                   
#> [2] "again; and our knowledge of all these early stages of thought and feeling"
#> [3] "is in any case so conjectural and imperfect that farther discussion would"
#> [4] "not be worth while."                                                      
#> [5] "Religion, therefore, as I now ask you arbitrarily to take it, shall mean"

The package also contains functions to count words, characters, and sentences, and these functions follow the same consistent interface.

count_words(james)
#> varieties 
#>       112
count_characters(james)
#> varieties 
#>       673
count_sentences(james)
#> varieties 
#>        13

The chunk_text() function splits a document into smaller chunks, each with the same number of words.
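A brief sketch of that function (the chunk_size argument name is an assumption here; check ?chunk_text for the exact signature and its default):

```r
library(tokenizers)

# A hypothetical short document for illustration
text <- paste(rep("all work and no play makes jack a dull boy", 10),
              collapse = " ")

# Split into chunks of roughly 30 words each
chunks <- chunk_text(text, chunk_size = 30)
length(chunks)            # how many chunks the text yields
count_words(chunks[[1]])  # each chunk holds about chunk_size words
```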

Contributing

Contributions to the package are more than welcome. One way that you can help is by using this package in your R package for natural language processing. If you want to contribute a tokenization function to this package, it should follow the same conventions as the rest of the functions whenever it makes sense to do so.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

