
Talk with NCBI Entrez using R


rentrez

rentrez provides functions that work with the NCBI Eutils API to search, download data from, and otherwise interact with NCBI databases.

Install

rentrez is on CRAN, so you can get the latest stable release with install.packages("rentrez"). This repository will sometimes be a little ahead of the CRAN version; if you want the latest (and possibly greatest) version, you can install the current GitHub version using Hadley Wickham's devtools.

library(devtools)
install_github("ropensci/rentrez")

The EUtils API

Each of the functions exported by rentrez is documented, and this README and the package vignette provide examples of how to use the functions together as part of a workflow. The API itself is well documented: be sure to read the official documentation to get the most out of the API. In particular, be aware of the NCBI's usage policies and try to limit very large requests to off-peak (USA) times (rentrez takes care of limiting the number of requests per second, and of setting the appropriate Entrez tool name in each request).
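One practical way to stay within those rate limits is to register an NCBI API key; recent versions of rentrez can send one with each request, which raises the allowance from 3 to 10 requests per second. A minimal sketch (the key shown is a made-up placeholder, not a real key):

```r
library(rentrez)
# set_entrez_key() stores the key for the current R session (in the
# ENTREZ_KEY environment variable); rentrez then sends it with every
# request. For permanent use, set ENTREZ_KEY in your .Renviron instead.
set_entrez_key("0123456789abcdef0123456789abcdef0123")  # placeholder key
Sys.getenv("ENTREZ_KEY")
```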

Hopefully this README, the package's vignette, and the in-line documentation provide you with enough information to get started with rentrez. If you need more help, or if you discover a bug in rentrez, please let us know, either through one of the contact methods described here, or by filing an issue.

Examples

In many cases, doing something interesting with EUtils will take multiple calls. Here are a few examples of how the functions work together (check out the package vignette for others).

Getting data from that great paper you've just read

Let's say I've just read a paper on the evolution of Hox genes, Di-Poi et al. (2010), and I want to get the data required to replicate their results. First, I need the unique ID for this paper in PubMed (the PMID). Unfortunately, many journals don't give PMIDs for their papers, but we can use entrez_search to find the paper using the doi field:

library(rentrez)
hox_paper <- entrez_search(db="pubmed", term="10.1038/nature08789[doi]")
hox_paper$ids
# [1] "20203609"

Now, what sorts of data are available from other NCBI databases for this paper?

hox_data <- entrez_link(db="all", id=hox_paper$ids, dbfrom="pubmed")
hox_data
# elink object with contents:
#  $links: IDs for linked records from NCBI
# 

In this case all the data is in the links element:

hox_data$links
# elink result with information from 14 databases:
#  [1] pubmed_medgen              pubmed_pmc_refs           
#  [3] pubmed_pubmed              pubmed_pubmed_alsoviewed  
#  [5] pubmed_pubmed_citedin      pubmed_pubmed_combined    
#  [7] pubmed_pubmed_five         pubmed_pubmed_reviews     
#  [9] pubmed_pubmed_reviews_five pubmed_mesh_major         
# [11] pubmed_nuccore             pubmed_nucleotide         
# [13] pubmed_protein             pubmed_taxonomy_entrez

Each of the character vectors in this object contains unique IDs for records in the named database. These functions try to make the most useful bits of the returned files available to users, but they also return the original file in case you want to dive into the XML yourself.

In this case we'll get the protein sequences as fasta files, using entrez_fetch:

hox_proteins <- entrez_fetch(db="protein", id=hox_data$links$pubmed_protein, rettype="fasta")
# No encoding supplied: defaulting to UTF-8.
cat(substr(hox_proteins, 1, 237))
# >gi|290760438|gb|ADD54588.1| HOXA10, partial [Saiphos equalis]
# MACSESPAANSFLVDSLISSASVRGEGGGGGGGGGGAGGGGGEGGGGGGGVYYPNNSSVYLPQTSELSYG
# LPSYGLFPVLSKRNEGPSQSMVPASHTYMSGMEVWLDPPRSCRLEDPESPQATSCSFTPNIKEENSYCLY
# DSDKGPKEATATDLSTFPRLTSEVCSMNNV

Retrieving datasets associated with a particular organism

I like spiders. So let's say I want to learn a little more about New Zealand's endemic "black widow", the katipo. Specifically, the katipo has in the past been split into two species; can we make a phylogeny to test this idea?

The first step here is to use the function entrez_search to find datasets that include katipo sequences. The popset database has sequences arising from phylogenetic or population-level studies, so let's start there.

library(rentrez)
katipo_search <- entrez_search(db="popset", term="Latrodectus katipo[Organism]")
katipo_search$count
# [1] 6

In this result, count is the total number of hits for the search term. We can use entrez_summary to learn a little about these datasets. rentrez will parse the returned XML into a list of esummary records, with each list entry corresponding to one of the IDs it is passed. In this case we get six records, and we can see what each one contains like so:

katipo_summs <- entrez_summary(db="popset", id=katipo_search$ids)
katipo_summs
# List of  6 esummary records. First record:
# 
#  $`167843272`
# esummary result with 17 items:
#  [1] uid        caption    title      extra      gi         settype   
#  [7] createdate updatedate flags      taxid      authors    article   
# [13] journal    strain     statistics properties oslt

And we can extract specific elements from a list of summary records with extract_from_esummary:

titles <- extract_from_esummary(katipo_summs, "title")
unname(titles)
# [1] "Latrodectus katipo 18S ribosomal RNA gene, partial sequence; internal transcribed spacer 1, 5.8S ribosomal RNA gene, and internal transcribed spacer 2, complete sequence; and 28S ribosomal RNA gene, partial sequence."
# [2] "Latrodectus katipo cytochrome oxidase subunit 1 (COI) gene, partial cds; mitochondrial."                                                                                                                                 
# [3] "Latrodectus 18S ribosomal RNA gene, partial sequence; internal transcribed spacer 1, 5.8S ribosomal RNA gene, and internal transcribed spacer 2, complete sequence; and 28S ribosomal RNA gene, partial sequence."       
# [4] "Latrodectus cytochrome oxidase subunit 1 (COI) gene, partial cds; mitochondrial."                                                                                                                                        
# [5] "Latrodectus tRNA-Leu (trnL) gene, partial sequence; and NADH dehydrogenase subunit 1 (ND1) gene, partial cds; mitochondrial."                                                                                            
# [6] "Theridiidae cytochrome oxidase subunit I (COI) gene, partial cds; mitochondrial."

Let's just get the two mitochondrial loci (COI and trnL), using entrez_fetch:

COI_ids <- katipo_search$ids[c(2,6)]
trnL_ids <- katipo_search$ids[5]
COI <- entrez_fetch(db="popset", id=COI_ids, rettype="fasta")
# No encoding supplied: defaulting to UTF-8.
trnL <- entrez_fetch(db="popset", id=trnL_ids, rettype="fasta")
# No encoding supplied: defaulting to UTF-8.

The "fetched" results are fasta formatted characters, which can be written to disk easily:

write(COI, "Test/COI.fasta")
write(trnL, "Test/trnL.fasta")

Once you've got the sequences you can do what you want with them, but I wanted a phylogeny, and we can do that entirely within R. To get a nice tree with legible tip labels I'm going to use stringr to extract just the species names, and ape to build and root a neighbour-joining tree:

library(ape)
# Write the COI sequences to a temporary file and read them back as DNA
tf <- tempfile()
write(COI, tf)
coi <- read.dna(tf, format="fasta")
# Align the sequences (ape's muscle() requires the MUSCLE program to be installed)
coi_aligned <- muscle(coi)
# Build a neighbour-joining tree from pairwise distances
tree <- nj(dist.dna(coi_aligned))
# Tidy the tip labels down to genus and species names
tree$tip.label <- stringr::str_extract(tree$tip.label, "Steatoda [a-z]+|Latrodectus [a-z]+")
# Root on the outgroup and plot
plot(root(tree, outgroup="Steatoda grossa"), cex=0.8)

web_history and big queries

The NCBI provides search history features, which can be useful for dealing with large lists of IDs or repeated searches.

As an example, imagine you wanted to learn something about all of the SNPs in the non-recombining portion of the Y chromosome in humans. You could first find these SNPs using entrez_search, using the "CHR" (chromosome) and "CPOS" (position on chromosome) fields to specify the region of interest. (The syntax for these search terms is described in the vignette and the documentation for entrez_search):

snp_search <- entrez_search(db="snp", 
                            term="(Y[CHR] AND Homo[ORGN]) NOT 10001:2781479[CPOS]")
snp_search
# Entrez search result with 234154 hits (object contains 20 IDs and no web_history object)
#  Search term (as translated):  (Y[CHR] AND "Homo"[Organism]) NOT 10001[CHRPOS] :  ...

When I wrote this, that was a little over 200,000 SNPs. It's probably not a good idea to set retmax to 200,000 and just download all of those identifiers. Instead, we can store this list of IDs on the NCBI's server and refer to them in later calls to functions like entrez_link and entrez_fetch that accept a web history object.

snp_search <- entrez_search(db="snp", 
                            term="(Y[CHR] AND Homo[ORGN]) NOT 10001:2781479[CPOS]", 
                            use_history = TRUE)
snp_search
# Entrez search result with 234154 hits (object contains 20 IDs and a web_history object)
#  Search term (as translated):  (Y[CHR] AND "Homo"[Organism]) NOT 10001[CHRPOS] :  ...

As you can see, the result of the search now includes a web_history object. We can use that object to refer to these IDs in later calls. Here we will just fetch complete records for the first 5 SNPs.

recs <- entrez_fetch(db="snp", web_history=snp_search$web_history, retmax=5, rettype="xml", parsed=TRUE)
class(recs)
# [1] "XMLInternalDocument" "XMLAbstractDocument"

The records come to us as parsed XML objects, which you could further process with the XML library or write to disk for later use.
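The same web_history object also makes it practical to download a large result set in batches, by paging through the records with retstart. A sketch (the chunk size and total are illustrative, and each request needs a working network connection):

```r
library(rentrez)
# Re-run the search, keeping the results on the NCBI's server
snp_search <- entrez_search(db = "snp",
                            term = "(Y[CHR] AND Homo[ORGN]) NOT 10001:2781479[CPOS]",
                            use_history = TRUE)
# Fetch the first 2,000 records in chunks of 500, paging with retstart.
# Each chunk arrives as a character string of XML; here we just collect
# them, but in practice you might parse or write each one before moving on.
chunks <- list()
for (seq_start in seq(0, 1500, by = 500)) {
    chunks[[length(chunks) + 1]] <- entrez_fetch(
        db = "snp",
        web_history = snp_search$web_history,
        rettype = "xml",
        retmax = 500,
        retstart = seq_start)
}
```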

Getting information about NCBI databases

Most of the examples above required some background information about what databases NCBI has to offer, and how they can be searched. rentrez provides a set of functions with names starting entrez_db that help you to discover this information in an interactive session.

First up, entrez_dbs() gives you a list of database names:

entrez_dbs()
#  [1] "pubmed"          "protein"         "nuccore"        
#  [4] "nucleotide"      "nucgss"          "nucest"         
#  [7] "structure"       "genome"          "annotinfo"      
# [10] "assembly"        "bioproject"      "biosample"      
# [13] "blastdbinfo"     "books"           "cdd"            
# [16] "clinvar"         "clone"           "gap"            
# [19] "gapplus"         "grasp"           "dbvar"          
# [22] "epigenomics"     "gene"            "gds"            
# [25] "geoprofiles"     "homologene"      "medgen"         
# [28] "mesh"            "ncbisearch"      "nlmcatalog"     
# [31] "omim"            "orgtrack"        "pmc"            
# [34] "popset"          "probe"           "proteinclusters"
# [37] "pcassay"         "biosystems"      "pccompound"     
# [40] "pcsubstance"     "pubmedhealth"    "seqannot"       
# [43] "snp"             "sra"             "taxonomy"       
# [46] "unigene"         "gencoll"         "gtr"

Some of the names are a little opaque, so you can get some more descriptive information about each with entrez_db_summary()

entrez_db_summary("cdd")
#  DbName: cdd
#  MenuName: Conserved Domains
#  Description: Conserved Domain Database
#  DbBuild: Build150814-1106.1
#  Count: 50648
#  LastUpdate: 2015/08/14 18:42

entrez_db_searchable() lets you discover the fields available in search terms for a given database. You get back a named list whose names are the search fields; each element contains additional information about that field (you can also use as.data.frame to create a data frame with one search field per row):

search_fields <- entrez_db_searchable("pmc")
search_fields$GRNT
#  Name: GRNT
#  FullName: Grant Number
#  Description: NIH Grant Numbers
#  TermCount: 2272841
#  IsDate: N
#  IsNumerical: N
#  SingleToken: Y
#  Hierarchy: N
#  IsHidden: N

Finally, entrez_db_links() takes a database name and returns a list of other NCBI databases that might contain linked records.

entrez_db_links("omim")
# Databases with linked records for database 'omim'
#  [1] biosample   biosystems  books       clinvar     dbvar      
#  [6] gene        genetests   geoprofiles gtr         homologene 
# [11] mapview     medgen      medgen      nuccore     nucest     
# [16] nucgss      omim        pcassay     pccompound  pcsubstance
# [21] pmc         protein     pubmed      pubmed      sra        
# [26] structure   unigene

Trendy topics in genetics

This one is a little more trivial, but you can also use rentrez to search PubMed, and the EUtils API allows you to limit searches by the year in which a paper was published. That gives us a chance to find the trendiest -omics going around (this involves quite a lot of repeated searching, so if you want to run your own version be sure to do it at off-peak times).

Let's start by making a function that finds the number of records matching a given search term for each of several years (using the mindate and maxdate terms from the Eutils API):

library(rentrez)
papers_by_year <- function(years, search_term){
    return(sapply(years, function(y) entrez_search(db="pubmed",term=search_term, mindate=y, maxdate=y, retmax=0)$count))
}

With that we can fetch the data for each term and, by searching with no term, find the total number of papers published in each year:

years <- 1990:2015
total_papers <- papers_by_year(years, "")
omics <- c("genomic", "epigenomic", "metagenomic", "proteomic", "transcriptomic", "pharmacogenomic", "connectomic" )
trend_data <- sapply(omics, function(t) papers_by_year(years, t))
trend_props <- trend_data/total_papers

That's the data; let's plot it:

library(reshape)
library(ggplot2)
trend_df <- melt(data.frame(years, trend_props), id.vars="years")
p <- ggplot(trend_df, aes(years, value, colour=variable))
p + geom_line(size=1) + scale_y_log10("proportion of papers")

Giving us... well, this: a log-scale plot of the proportion of PubMed papers matching each -omics term in each year (figure not reproduced here).
This package is part of a richer suite called fulltext, along with several other packages, that provides the ability to search for and retrieve full text of open access scholarly articles.

