

remake


Make-like build management, reimagined for R.

See below for installation instructions.

The idea

"make", when it works, is wonderful. Being able to change part of a complicated system and the re-make, updating only the parts of the system that have changed is great. While it gets some use It's very heavily tailored towards building software though. While make can be used to create reproducible research workflows (e.g. here and here), it is a challenge.

The idea here is to re-imagine a set of ideas from make but built for R. Rather than having a series of calls to different instances of R (as happens if you run make on R scripts), the idea is to define pieces of a pipeline within an R session. Rather than being language agnostic (like make must be), remake is unapologetically R focussed.

Note: This package is under heavy development (as of May 2015), so things may change under you if you start using this now. However, the core format seems to be working on some nontrivial cases that we are using in our own work. If you're willing to have things change around a bit, feel free to start using this and post issues with problems/friction/ideas, etc., so that the package comes to reflect your workflow better.

Note: Between versions 0.1 and 0.2.0 the database format has changed. This will require rebuilding your project. This corresponds to adding the dependency on storr. Everything else should remain unchanged though.

What remake does

You describe the beginning, intermediate and end points of your analysis, and how they flow together.

  • "targets" are the points in your analysis. They can be either files (data files for input; tables, plots, knitr reports for output) or they can be R objects (representing processed data, results, fitted models, etc).
  • "rules" are how the targets in your analysis relate together and are simply the names of R functions.
  • "dependencies" are the targets that need to already be made before a particular target can run (for example, a processed data set might depend on downloading a file; a plot might depend on a processed data set).

There might be very few steps or very many, but remake will take care of stepping through the analysis in a correct order (there can be more than one correct order!).

Example

Here's a very simple analysis pipeline that illustrates the basic idea:

  1. Download some data from the web into a local file
  2. Process that file to prepare it for analysis
  3. Create a plot from that file
  4. Create a knitr report that uses the same set of objects

The remakefile that describes this pipeline might look like this:

sources:
  - code.R

targets:
  all:
    depends: plot.pdf

  data.csv:
    command: download_data(target_name)  # target_name is the target's own name, "data.csv"

  processed:
    command: process_data("data.csv")

  plot.pdf:
    command: myplot(processed)
    plot: true   # remake wraps the command in pdf("plot.pdf") / dev.off()

  report.md:
    depends: processed
    knitr: true  # built by running knitr on the source report.Rmd

(This is a YAML file.) The full version of this file, with explanations, is here.

You still need to write functions that carry out each step; that might look something like this, defining the functions download_data, process_data and myplot.
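
Here is a minimal sketch of what such a code.R could contain; the download URL and the details of the processing and plotting are placeholders, not the ones used in the real example:

download_data <- function(dest) {
  # dest receives the target's name ("data.csv") via target_name
  url <- "https://example.com/data.csv"  # hypothetical URL
  download.file(url, dest)
}

process_data <- function(filename) {
  # read the raw file and do whatever cleaning the analysis needs
  d <- read.csv(filename, stringsAsFactors = FALSE)
  d[complete.cases(d), ]
}

myplot <- function(d) {
  # remake wraps this call in pdf("plot.pdf") / dev.off() because of plot: true
  plot(d[[1]], d[[2]], xlab = names(d)[1], ylab = names(d)[2])
}

Remake can then be run from within R: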

remake::make()
# [ BUILD ] data.csv            |  download_data("data.csv")
# [ BUILD ] processed           |  processed <- process_data("data.csv")
# [ BUILD ] plot.pdf            |  myplot(processed) # ==> plot.pdf
# [       ] all

The "BUILD": next to each target indicates that it is being run (which may take some time for a complicated step) and after the pipe a call is printed that indicates what is happening (this is a small white lie).

Rerunning remake:

remake::make()
# [    OK ] data.csv
# [    OK ] processed
# [    OK ] plot.pdf
# [       ] all

Everything is up to date, so remake just skips over things.

There are also special knitr targets:

remake::make("report.md")
# [    OK ] data.csv
# [    OK ] processed
# [       ] report.Rmd
# [  KNIT ] report.md            |  knitr::knit("report.Rmd", "report.md")

This arranges for the target processed, on which this report depends (see the remakefile), to be passed through to knitr, along with all the functions defined in code.R, and builds the report report.md from the knitr source report.Rmd (the source is here). Note that because processed was already up to date, remake skips rebuilding it.
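
The real report.Rmd is linked above; the key point is that inside its R chunks the object processed and the functions from code.R can be used directly, with nothing to load explicitly. For example (a hypothetical chunk body, not the actual report):

summary(processed)
myplot(processed)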

remake can also be run from the command line (outside of R), to make it easy to include as part of a bigger pipeline, perhaps using make! (I do this in my own use of remake).

Rather than require that you buy in to some all-singing, all-dancing workflow tool, remake tries to be agnostic about how you work: there are no special functions within your code that you need to use. You can also create a linear version of your analysis at any time:

remake::make_script()
# source("code.R")
# download_data("data.csv")
# processed <- process_data("data.csv")
# pdf("plot.pdf")
# myplot(processed)
# dev.off()

Other core features:

  • remake determines whether any dependencies have changed when deciding if it needs to run your analysis. So if a downloaded data file changes, everything that depends on it will be rebuilt when needed.
    • however, rather than rely on file modification times, remake uses a hash (like a digital fingerprint) of the file or object to determine whether the contents have really changed, so inconsequential changes are ignored (see the sketch after this list).
  • Object targets are persistent across sessions, so manual saving of objects is not necessary. This avoids a lot of manual caching of results that tends to litter long-running analysis code.
  • remake also checks whether the functions used as rules (or called from those functions) have changed, and will rebuild targets if they have (for the rationale, see here).
  • Because remake keeps track of which files and objects it created, it can automatically clean up after itself. This makes it easy to rerun the entire analysis beginning-to-end.
    • three levels of cleaning (tidy, clean and purge) are provided, reflecting how strongly you want to keep particular targets.
  • Support for easily making figures and running knitr as special targets.
  • Automated installation of dependencies.
  • Automated curation of .gitignore files to prevent accidentally committing large output to your repository.
  • (Very early) support for archiving a set of analyses that other users can import.
    • This means you can share results of long-running analyses/simulations and people can easily run the last steps of your analyses.
    • Eventually this will interface with rfigshare and GitHub releases.
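
To make the hashing point above concrete (this is not remake's internal code, just an illustration of the idea using the digest package that remake depends on):

library(digest)

# Hash of a file's contents: touching the file without editing it leaves the
# hash unchanged, so targets that depend on it are not rebuilt.
writeLines("x,y\n1,2", "data.csv")
digest("data.csv", file = TRUE)

# The same idea applies to object targets such as 'processed': the hash of the
# value decides whether downstream targets are out of date.
processed <- data.frame(x = 1, y = 2)
digest(processed)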

Real-world examples

Tutorials

Some tutorials on using remake with different datasets.

Installation

Install using devtools:

devtools::install_github("richfitz/remake")

If you don't have devtools installed you will see an error "there is no package called 'devtools'"; if that happens, install devtools with install.packages("devtools").

remake depends on several R packages, all of which can be installed from CRAN. The required packages are:

  • R6 for holding things together
  • yaml for reading the configuration
  • digest for efficiently hashing objects
  • crayon for coloured output on the terminal (not in RStudio or Rgui)
  • optparse for a command line version (run from outside of an R session)

All of these can be installed with:

install.packages(c("R6", "yaml", "digest", "crayon", "optparse"))

We also depend on storr for object storage:

devtools::install_github("richfitz/storr")
