🌶️ Create lightweight schema.org descriptions of your datasets

dataspice

The goal of dataspice is to make it easier for researchers to create basic, lightweight, and concise metadata files for their datasets by editing the kind of files they’re probably most familiar with: CSVs. To spice up their data with a dash of metadata. These metadata files can then be used to:

  • Make useful information available during analysis.
  • Create a helpful dataset README webpage for your data similar to how pkgdown creates websites for R packages.
  • Produce more complex metadata formats for richer description of your datasets and to aid dataset discovery.

Metadata fields are based on Schema.org/Dataset and other metadata standards and represent a lowest common denominator, which means converting between formats should be relatively straightforward.
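
As a point of reference, the snippet below is a hand-written sketch (not dataspice output) of the kind of lightweight schema.org/Dataset description the package works towards, built here with jsonlite:

library(jsonlite)

# A minimal, hand-written schema.org/Dataset description (illustrative only;
# not produced by dataspice)
dataset <- list(
  `@context` = "https://schema.org",
  `@type` = "Dataset",
  name = "Example dataset",
  description = "A short, human-readable description of the dataset.",
  keywords = c("example", "metadata"),
  license = "https://creativecommons.org/licenses/by/4.0/"
)

toJSON(dataset, pretty = TRUE, auto_unbox = TRUE)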

Example

A basic example repository demonstrating what using dataspice might look like can be found at https://github.com/amoeba/dataspice-example. From there, you can also check out a preview of the HTML dataspice generates at https://amoeba.github.io/dataspice-example and how Google sees it at https://search.google.com/test/rich-results?url=https%3A%2F%2Famoeba.github.io%2Fdataspice-example%2F.

A much more detailed example has been created by Anna Krystalli at https://annakrystalli.me/dataspice-tutorial/ (GitHub repo).

Installation

You can install the latest version from CRAN:

install.packages("dataspice")
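
If you prefer the development version, it can likely be installed from GitHub (assuming the repository lives at ropensci/dataspice):

# install.packages("remotes")
remotes::install_github("ropensci/dataspice")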

Workflow

create_spice()
# Then fill in template CSV files, more on this below
write_spice()
build_site() # Optional

(Figure: diagram showing a workflow for using dataspice)

Create spice

create_spice() creates template metadata spreadsheets in a folder (by default created in the data folder in the current working directory).

The template files are:

  • biblio.csv - for title, abstract, spatial and temporal coverage, etc.
  • creators.csv - for data authors
  • attributes.csv - explains each of the variables in the dataset
  • access.csv - for files, file types, and download URLs (if appropriate)
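
A minimal sketch of this step, assuming the default layout that places the templates under data/metadata:

create_spice()

# The four template files should now exist (path assumes the default layout)
list.files(file.path("data", "metadata"))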

Fill in templates

The user needs to fill in the details of the four template files. These CSV files can be edited directly, or with the associated helper functions and/or Shiny apps.

Helper functions

  • prep_attributes() populates the fileName and variableName columns of the attributes.csv file using the header row of the data files.

  • prep_access() populates the fileName, name and encodingFormat columns of the access.csv file from the files in the folder containing the data.

To see an example of how prep_attributes() works, load the data files that ship with the package:

data_files <- list.files(system.file("example-dataset/", package = "dataspice"),
  pattern = ".csv",
  full.names = TRUE
)

This function assumes that the metadata templates are in a folder called metadata within a data folder.

attributes_path <- file.path("data", "metadata", "attributes.csv")

Using purrr::map(), prep_attributes() can be applied over multiple files to populate the fileName and variableName columns for each file:

data_files %>%
  purrr::map(~ prep_attributes(.x, attributes_path))

The output of prep_attributes() has the first two columns filled out:

fileName         variableName  description  unitText
BroodTables.csv  Stock.ID      NA           NA
BroodTables.csv  Species       NA           NA
BroodTables.csv  Stock         NA           NA
BroodTables.csv  Ocean.Region  NA           NA
BroodTables.csv  Region        NA           NA
BroodTables.csv  Sub.Region    NA           NA
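
prep_access() can be run in the same spirit; the sketch below assumes the default data/ and data/metadata layout used in this example (see ?prep_access for the exact arguments):

# Fill in fileName, name, and encodingFormat for each data file found;
# defaults assume the data/ and data/metadata layout used above
prep_access()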

Shiny helper apps

Each of the metadata templates can be edited interactively using a Shiny app:

  • edit_attributes() opens a Shiny app that can be used to edit attributes.csv. The Shiny app displays the current attributes table and lets the user fill in an informative description and units (e.g. meters, hectares, etc.) for each variable.
  • edit_access() opens an editable version of access.csv
  • edit_creators() opens an editable version of creators.csv
  • edit_biblio() opens an editable version of biblio.csv

(Screenshot: the edit_attributes() Shiny app)

Remember to click on Save when finished editing.
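
A typical interactive editing pass might look like the sketch below; each call opens a Shiny app in the browser or RStudio viewer:

edit_biblio()      # title, abstract, spatial and temporal coverage, ...
edit_creators()    # data authors
edit_attributes()  # variable descriptions and units
edit_access()      # files, file types, and download URLs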

Completed metadata files

The first few rows of the completed metadata tables in this example will look like this:

access.csv has one row for each file

fileName         name             contentUrl  encodingFormat
StockInfo.csv    StockInfo.csv    NA          CSV
BroodTables.csv  BroodTables.csv  NA          CSV
SourceInfo.csv   SourceInfo.csv   NA          CSV

attributes.csv has one row for each variable in each file

fileName         variableName  description                                       unitText
BroodTables.csv  Stock.ID      Unique stock identifier                           NA
BroodTables.csv  Species       species of stock                                  NA
BroodTables.csv  Stock         Stock name, generally river where stock is found  NA
BroodTables.csv  Ocean.Region  Ocean region                                      NA
BroodTables.csv  Region        Region of stock                                   NA
BroodTables.csv  Sub.Region    Sub.Region of stock                               NA

biblio.csv is one row containing descriptors including spatial and temporal coverage

title:                  Compiled annual statewide Alaskan salmon escapement counts, 1921-2017
description:            The number of mature salmon migrating from the marine environment to freshwater streams is defined as escapement. Escapement data are the enumeration of these migrating fish as they pass upstream, …
datePublished:          2018-02-12 08:00:00
citation:               NA
keywords:               salmon, alaska, escapement
license:                NA
funder:                 NA
geographicDescription:  NA
northBoundCoord:        78
eastBoundCoord:         -131
southBoundCoord:        47
westBoundCoord:         -171
wktString:              NA
startDate:              1921-01-01 08:00:00
endDate:                2017-01-01 08:00:00

creators.csv has one row for each of the dataset authors

id  name            affiliation                                             email
NA  Jeanette Clark  National Center for Ecological Analysis and Synthesis   [email protected]
NA  Rich Brenner    Alaska Department of Fish and Game                      richard.brenner.alaska.gov

Save JSON-LD file

write_spice() generates a JSON-LD file (“linked data”) to aid in dataset discovery, the creation of more extensive metadata (e.g. EML), and the creation of a website.

Here’s a view of the dataspice.json file of the example data:

(Screenshot: listviewer output showing an example dataspice.json file)
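
To generate and inspect the file yourself, a short sketch (assuming the completed templates and the output both live in data/metadata):

write_spice()

# Peek at the top-level fields of the generated JSON-LD file; jsonlite is
# used here only for inspection and the path assumes the default layout
spice_json <- jsonlite::read_json(file.path("data", "metadata", "dataspice.json"))
names(spice_json)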

Build website

  • build_site() creates a bare-bones index.html file in the repository's docs folder with a simple view of the dataset, including the metadata and an interactive map. For example, the example repository above produces the site at https://amoeba.github.io/dataspice-example/.

(Screenshot: an example website generated by dataspice)
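
A quick way to build and preview the site locally, assuming the default docs/index.html output path described above:

build_site()

# Open the generated page locally
utils::browseURL("docs/index.html")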

Convert to EML

The metadata fields dataspice uses are based largely on their compatibility with terms from Schema.org. However, dataspice metadata can be converted to Ecological Metadata Language (EML), a much richer schema. The conversion isn’t perfect but dataspice will do its best to convert your dataspice metadata to EML:

library(dataspice)

# Load an example dataspice JSON that comes installed with the package
spice <- system.file(
  "examples", "annual-escapement.json",
  package = "dataspice"
)

# Convert it to EML
eml_doc <- spice_to_eml(spice)
#> Warning: variableMeasured not crosswalked to EML because we don't have enough
#> information. Use `crosswalk_variables` to create the start of an EML attributes
#> table. See ?crosswalk_variables for help.
#> You might want to run EML::eml_validate on the result at this point and fix what validations errors are produced. You will commonly need to set `packageId`, `system`, and provide `attributeList` elements for each `dataTable`.

You may receive warnings depending on which dataspice fields you filled in, and this process will very likely produce an invalid EML record, which is fine at this stage:

library(EML)
#> 
#> Attaching package: 'EML'
#> The following object is masked from 'package:magrittr':
#> 
#>     set_attributes

eml_validate(eml_doc)
#> [1] FALSE
#> attr(,"errors")
#> [1] "Element '{https://eml.ecoinformatics.org/eml-2.2.0}eml': The attribute 'packageId' is required but missing."                                  
#> [2] "Element '{https://eml.ecoinformatics.org/eml-2.2.0}eml': The attribute 'system' is required but missing."                                     
#> [3] "Element 'dataTable': Missing child element(s). Expected is one of ( physical, coverage, methods, additionalInfo, annotation, attributeList )."
#> [4] "Element 'dataTable': Missing child element(s). Expected is one of ( physical, coverage, methods, additionalInfo, annotation, attributeList )."
#> [5] "Element 'dataTable': Missing child element(s). Expected is one of ( physical, coverage, methods, additionalInfo, annotation, attributeList )."

This is because some fields in dataspice store information in different structures and because EML requires many fields that dataspice doesn’t have fields for. At this point, you should look over the validation errors produced by EML::eml_validate and fix those. Note that this will likely require familiarity with the EML Schema and the EML package.
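
As a hedged sketch, the first two errors can usually be addressed by setting the missing attributes directly on the list returned by spice_to_eml(); the placeholder values below are assumptions for illustration, not dataspice conventions:

# packageId should be a unique identifier for the metadata record and system
# names the identifier system; the values below are placeholders
eml_doc$packageId <- "dataspice-example-1"
eml_doc$system <- "local"

# Re-validate; the dataTable errors remain until attributeList elements are
# added, e.g. with EML::set_attributes()
eml_validate(eml_doc)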

Once you’re done, you can write out an EML XML file:

out_path <- tempfile()
write_eml(eml_doc, out_path)
#> NULL

Convert from EML

Just as dataspice metadata can be converted to EML, an existing EML record can be converted to a set of dataspice metadata tables, which can then be edited using the dataspice workflow:

library(EML)

eml_path <- system.file("example-dataset/broodTable_metadata.xml", package = "dataspice")
eml <- read_eml(eml_path)
# Creates four CSV files in the `data/metadata` directory
my_spice <- eml_to_spice(eml, "data/metadata")
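
To confirm the conversion worked, you can read one of the generated templates back in (assuming eml_to_spice() wrote into data/metadata as in the call above):

attrs <- read.csv(file.path("data", "metadata", "attributes.csv"))
head(attrs)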

Resources

A few existing tools & data standards to help users in specific domains:

…And others indexed in Fairsharing.org & the RDA metadata directory.

Code of Conduct

Please note that this package is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

Contributors

This package was developed at rOpenSci’s 2018 unconf by (in alphabetical order):
