
textreuse

Detect text reuse and document similarity

Overview

This R package provides a set of functions for measuring similarity among documents and detecting passages that have been reused. It implements shingled n-gram, skip n-gram, and other tokenizers; similarity/dissimilarity functions; pairwise comparisons; minhash and locality-sensitive hashing algorithms; and a version of the Smith-Waterman local alignment algorithm suitable for natural language. It is useful, for example, for detecting duplicate documents in a corpus prior to text analysis, or for identifying borrowed passages between texts. The classes provided by this package follow the model of other natural language processing packages for R, especially the NLP and tm packages. (However, this package has no dependency on Java, which should make it easier to install.)
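As a concrete illustration of the tokenizers mentioned above, tokenize_ngrams() splits a string into overlapping word shingles. (This sketch assumes the package is installed; the sentence is made up for illustration.)

```r
library(textreuse)

# Split a five-word sentence into overlapping 3-word shingles;
# five words yield three shingles.
tokenize_ngrams("The quick brown fox jumps", n = 3)
```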

Citation

If you use this package for scholarly research, I would appreciate a citation.

citation("textreuse")
#> 
#> To cite package 'textreuse' in publications use:
#> 
#>   Lincoln Mullen (2020). textreuse: Detect Text Reuse and Document
#>   Similarity. https://docs.ropensci.org/textreuse,
#>   https://github.com/ropensci/textreuse.
#> 
#> A BibTeX entry for LaTeX users is
#> 
#>   @Manual{,
#>     title = {textreuse: Detect Text Reuse and Document Similarity},
#>     author = {Lincoln Mullen},
#>     year = {2020},
#>     note = {https://docs.ropensci.org/textreuse, https://github.com/ropensci/textreuse},
#>   }

Installation

To install this package from CRAN:

install.packages("textreuse")

To install the development version from GitHub, use the devtools package:

# install.packages("devtools")
devtools::install_github("ropensci/textreuse", build_vignettes = TRUE)

Examples

There are three main approaches that one may take when using this package: pairwise comparisons, minhashing/locality sensitive hashing, and extracting matching passages through text alignment.

See the introductory vignette for a description of the classes provided by this package.

vignette("textreuse-introduction", package = "textreuse")

Pairwise comparisons

In this example we will load a tiny corpus of three documents. These documents are drawn from Kellen Funk's research into the propagation of legal codes of civil procedure in the nineteenth-century United States.

library(textreuse)
dir <- system.file("extdata/legal", package = "textreuse")
corpus <- TextReuseCorpus(dir = dir, meta = list(title = "Civil procedure"),
                          tokenizer = tokenize_ngrams, n = 7)

We have loaded the three documents into a corpus, which involves tokenizing the text and hashing the tokens. We can inspect the corpus as a whole or the individual documents that make it up.

corpus
#> TextReuseCorpus
#> Number of documents: 3 
#> hash_func : hash_string 
#> title : Civil procedure 
#> tokenizer : tokenize_ngrams
names(corpus)
#> [1] "ca1851-match"   "ca1851-nomatch" "ny1850-match"
corpus[["ca1851-match"]]
#> TextReuseTextDocument
#> file : /Users/lmullen/R/library/textreuse/extdata/legal/ca1851-match.txt 
#> hash_func : hash_string 
#> id : ca1851-match 
#> minhash_func : 
#> tokenizer : tokenize_ngrams 
#> content : § 4. Every action shall be prosecuted in the name of the real party
#> in interest, except as otherwise provided in this Act.
#> 
#> § 5. In the case of an assignment of a thing in action, the action by
#> the as

Now we can compare each of the documents to one another. The pairwise_compare() function applies a comparison function (in this case, jaccard_similarity()) to every pair of documents. The result is a matrix of scores. As we would expect, some documents are similar and others are not.

comparisons <- pairwise_compare(corpus, jaccard_similarity)
comparisons
#>                ca1851-match ca1851-nomatch ny1850-match
#> ca1851-match             NA              0    0.3842549
#> ca1851-nomatch           NA             NA    0.0000000
#> ny1850-match             NA             NA           NA
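The Jaccard similarity used here is the size of the intersection of the two documents' token sets divided by the size of their union. A minimal base-R sketch of the same measure, on two made-up token vectors:

```r
# Jaccard similarity: |A ∩ B| / |A ∪ B| over sets of tokens
jaccard <- function(x, y) {
  length(intersect(x, y)) / length(union(x, y))
}

a <- c("the", "quick", "brown", "fox")
b <- c("the", "lazy", "brown", "dog")
jaccard(a, b)  # 2 shared tokens out of 6 distinct tokens
#> [1] 0.3333333
```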

We can convert that matrix to a data frame of pairs and scores if we prefer.

pairwise_candidates(comparisons)
#> # A tibble: 3 x 3
#>   a              b              score
#> * <chr>          <chr>          <dbl>
#> 1 ca1851-match   ca1851-nomatch 0    
#> 2 ca1851-match   ny1850-match   0.384
#> 3 ca1851-nomatch ny1850-match   0

See the pairwise vignette for a fuller description.

vignette("textreuse-pairwise", package = "textreuse")

Minhashing and locality sensitive hashing

Pairwise comparisons can be very time-consuming because their number grows quadratically with the size of the corpus. (A corpus with 10 documents requires 45 comparisons; a corpus with 100 documents requires 4,950 comparisons; a corpus with 1,000 documents requires 499,500 comparisons.) That's why this package implements the minhash and locality sensitive hashing algorithms, which can detect candidate pairs much faster than pairwise comparisons in corpora of any significant size.
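The counts in parentheses are just the number of unordered pairs, n choose 2, which base R can confirm:

```r
# Unordered pairs among n documents: n * (n - 1) / 2
choose(c(10, 100, 1000), 2)
#> [1]     45   4950 499500
```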

For this example we will load a small corpus of ten documents published by the American Tract Society. We will also create a minhash function, which represents an entire document (regardless of length) by a fixed number of integer hashes. When we create the corpus, the documents will each have a minhash signature.

dir <- system.file("extdata/ats", package = "textreuse")
minhash <- minhash_generator(200, seed = 235)
ats <- TextReuseCorpus(dir = dir,
                       tokenizer = tokenize_ngrams, n = 5,
                       minhash_func = minhash)

Now we can calculate potential matches, extract the candidates, and apply a comparison function to just those candidates.

buckets <- lsh(ats, bands = 50, progress = FALSE)
candidates <- lsh_candidates(buckets)
scores <- lsh_compare(candidates, ats, jaccard_similarity, progress = FALSE)
scores
#> # A tibble: 2 x 3
#>   a                     b                      score
#>   <chr>                 <chr>                  <dbl>
#> 1 practicalthought00nev thoughtsonpopery00nevi 0.463
#> 2 remember00palm        remembermeorholy00palm 0.701

For details, see the minhash vignette.

vignette("textreuse-minhash", package = "textreuse")

Text alignment

We can also extract the optimal alignment between two documents with a version of the Smith-Waterman algorithm, originally used for protein sequence alignment, adapted here for natural language. The longest matching substring according to the scoring values will be extracted, and variations in the alignment will be marked.

a <- "'How do I know', she asked, 'if this is a good match?'"
b <- "'This is a match', he replied."
align_local(a, b)
#> TextReuse alignment
#> Alignment score: 7 
#> Document A:
#> this is a good match
#> 
#> Document B:
#> This is a #### match

For details, see the text alignment vignette.

vignette("textreuse-alignment", package = "textreuse")

Parallel processing

Loading the corpus and creating tokens benefit from using multiple cores, if available. (This works only on non-Windows machines.) To use multiple cores, set options("mc.cores" = 4L), where the number is how many cores you wish to use.
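For example (the value 4L here is an arbitrary choice; use however many cores your machine has available):

```r
# Tell the parallel backend to use four cores (has no effect on Windows)
options("mc.cores" = 4L)
getOption("mc.cores")
#> [1] 4
```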

Contributing and acknowledgments

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

Thanks to Noam Ross for his thorough peer review of this package for rOpenSci.


