The Leek group guide to data sharing

How to share data with a statistician

This is a guide for anyone who needs to share data with a statistician or data scientist. The target audiences I have in mind are:

  • Collaborators who need statisticians or data scientists to analyze data for them
  • Students or postdocs in various disciplines looking for consulting advice
  • Junior statistics students whose job it is to collate/clean/wrangle data sets

The goal of this guide is to provide instruction on the best way to share data, to avoid the most common pitfalls and sources of delay in the transition from data collection to data analysis. The Leek group works with a large number of collaborators, and the number one source of variation in the speed to results is the status of the data when they arrive at the Leek group. Based on my conversations with other statisticians, this is nearly universally true.

My strong feeling is that statisticians should be able to handle the data in whatever state they arrive. It is important to see the raw data, understand the steps in the processing pipeline, and be able to incorporate hidden sources of variability in one's data analysis. On the other hand, for many data types, the processing steps are well documented and standardized. So the work of converting the data from raw form to directly analyzable form can be performed before calling on a statistician. This can dramatically speed the turnaround time, since the statistician doesn't have to work through all the pre-processing steps first.

What you should deliver to the statistician

To facilitate the most efficient and timely analysis, this is the information you should pass to a statistician:

  1. The raw data.
  2. A tidy data set.
  3. A code book describing each variable and its values in the tidy data set.
  4. An explicit and exact recipe you used to go from 1 -> 2, 3.

Let's look at each part of the data package you will transfer.

The raw data

It is critical that you include the rawest form of the data that you have access to. This ensures that data provenance can be maintained throughout the workflow. Here are some examples of the raw form of data:

  • The strange binary file your measurement machine spits out
  • The unformatted Excel file with 10 worksheets the company you contracted with sent you
  • The complicated JSON data you got from scraping the Twitter API
  • The hand-entered numbers you collected looking through a microscope

You know the raw data are in the right format if you:

  1. Ran no software on the data
  2. Did not modify any of the data values
  3. Did not remove any data from the data set
  4. Did not summarize the data in any way

If you made any modifications to the data, it is no longer the raw form of the data. Reporting modified data as raw data is a very common way to slow down the analysis process, since the analyst will often have to do a forensic study of your data to figure out why the raw data look weird. (Also, imagine what would happen when new data arrive.)
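
One lightweight way to help maintain that provenance (my suggestion, not part of the original guide) is to record a checksum for every raw file before you touch anything, so anyone downstream can verify the files were never modified. A minimal R sketch, assuming the raw files live in a hypothetical raw_data/ directory:

    # Record a checksum per raw file; re-running md5sum() later and
    # comparing against this file confirms nothing was altered.
    library(tools)

    raw_files <- list.files("raw_data", full.names = TRUE, recursive = TRUE)
    checksums <- md5sum(raw_files)  # named vector: file path -> MD5 hash

    write.csv(
      data.frame(file = names(checksums), md5 = unname(checksums)),
      "raw_data_checksums.csv",
      row.names = FALSE
    )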

The tidy data set

The general principles of tidy data are laid out by Hadley Wickham in his paper "Tidy Data" (Journal of Statistical Software, 2014) and an accompanying video. While both the paper and the video describe tidy data using R, the principles are more generally applicable:

  1. Each variable you measure should be in one column
  2. Each different observation of that variable should be in a different row
  3. There should be one table for each "kind" of variable
  4. If you have multiple tables, each should include a column that allows them to be joined or merged

While these are the hard and fast rules, there are a number of other things that will make your data set much easier to handle. First is to include a row at the top of each data table/spreadsheet that contains full variable names. So if you measured age at diagnosis for patients, you would head that column with the name AgeAtDiagnosis instead of something like ADx or another abbreviation that may be hard for another person to understand.
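
To make the column-naming and tidy-layout points concrete, here is a toy sketch (my own example, assuming the tidyr package is installed) that reshapes a messy one-column-per-treatment table into tidy form with full, readable variable names:

    library(tidyr)

    # Messy: one column per treatment, so "treatment" hides in the headers.
    messy <- data.frame(
      PatientId  = c("P1", "P2", "P3"),
      TreatmentA = c(118, 132, 125),
      TreatmentB = c(121, 128, 130)
    )

    # Tidy: each variable is a column, each observation is a row.
    tidy <- pivot_longer(
      messy,
      cols      = c(TreatmentA, TreatmentB),
      names_to  = "Treatment",
      values_to = "SystolicBloodPressure"  # full name, not "SBP" or "sbp1"
    )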

Here is an example of how this would work from genomics. Suppose that for 20 people you have collected gene expression measurements with RNA-sequencing. You have also collected demographic and clinical information about the patients including their age, treatment, and diagnosis. You would have one table/spreadsheet that contains the clinical/demographic information. It would have four columns (patient id, age, treatment, diagnosis) and 21 rows (a header row of variable names, then one row for every patient). You would also have one spreadsheet for the summarized genomic data. Usually this type of data is summarized at the level of the number of counts per exon. Suppose you have 100,000 exons; then you would have a table/spreadsheet with 21 rows (a header row of exon names, then one row for each patient) and 100,001 columns (one column for patient ids and one column for each exon).
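
Scaled down to 3 patients and 4 exons, a sketch of what those two tables might look like in R (the variable names and the merge() call are illustrative, not prescribed by the guide):

    # Clinical/demographic table: one row per patient.
    clinical <- data.frame(
      PatientId = c("P1", "P2", "P3"),
      Age       = c(54, 61, 47),
      Treatment = c("drug", "placebo", "drug"),
      Diagnosis = c("stage1", "stage2", "stage1")
    )

    # Summarized genomic table: one row per patient, one column per exon.
    expression <- data.frame(
      PatientId = c("P1", "P2", "P3"),
      Exon1 = c(10, 25, 40),
      Exon2 = c(0, 3, 7),
      Exon3 = c(112, 98, 130),
      Exon4 = c(5, 0, 2)
    )

    # The shared PatientId column is what lets the tables be merged.
    combined <- merge(clinical, expression, by = "PatientId")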

If you are sharing your data with the collaborator in Excel, the tidy data should be in one Excel file per table. Each file should contain a single worksheet, with no macros applied to the data and no columns or cells highlighted. Alternatively, share the data in a CSV or tab-delimited text file. (Beware, however, that reading CSV files into Excel can sometimes lead to non-reproducible handling of date and time variables.)
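
Continuing the toy tables from the sketch above, writing one plain-text file per table in base R:

    # One CSV per tidy table; row.names = FALSE avoids an unnamed
    # extra column in the output file.
    write.csv(clinical,   "clinical.csv",   row.names = FALSE)
    write.csv(expression, "expression.csv", row.names = FALSE)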

The code book

For almost any data set, the measurements you calculate will need to be described in more detail than you can or should sneak into the spreadsheet. The code book contains this information. At minimum it should contain:

  1. Information about the variables (including units!) in the data set not contained in the tidy data
  2. Information about the summary choices you made
  3. Information about the experimental study design you used

In our genomics example, the analyst would want to know what the unit of measurement for each clinical/demographic variable is (age in years, treatment by name/dose, the levels of diagnosis and how heterogeneous they are). They would also want to know how you picked the exons you used for summarizing the genomic data (UCSC/Ensembl, etc.). They would also want to know any other information about how you did the data collection/study design. For example, are these the first 20 patients that walked into the clinic? Are they 20 patients highly selected by some characteristic like age? Are they randomized to treatments?

A common format for this document is a Word file. There should be a section called "Study design" with a thorough description of how you collected the data, and a section called "Code book" that describes each variable and its units.
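
The guide doesn't mandate any tooling here, but if you want a head start, here is a hypothetical helper of mine that generates a code-book skeleton from a tidy table, leaving the units and descriptions to be filled in by hand:

    # Build a code-book skeleton: one row per variable in the tidy table.
    make_codebook_skeleton <- function(df) {
      data.frame(
        Variable    = names(df),
        Class       = vapply(df, function(x) class(x)[1], character(1)),
        Units       = "",  # fill in by hand (years, mg/day, counts, ...)
        Description = ""   # fill in by hand
      )
    }

    # 'clinical' is the toy table from the earlier genomics sketch.
    write.csv(make_codebook_skeleton(clinical), "codebook_skeleton.csv",
              row.names = FALSE)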

How to code variables

When you put variables into a spreadsheet there are several main categories you will run into depending on their data type:

  1. Continuous
  2. Ordinal
  3. Categorical
  4. Missing
  5. Censored

Continuous variables are anything measured on a quantitative scale that could be any fractional number. An example would be something like weight measured in kg. Ordinal data are data that have a fixed, small (< 100) number of levels but are ordered. This could be, for example, survey responses where the choices are: poor, fair, good. Categorical data are data where there are multiple categories, but they aren't ordered. One example would be sex: male or female (coding these values as text rather than numbers is attractive because it is self-documenting). Missing data are data that are unobserved and you don't know the mechanism. You should code missing values as NA. Censored data are data where you know the missingness mechanism on some level. Common examples are a measurement being below a detection limit or a patient being lost to follow-up. They should also be coded as NA when you don't have the data, but you should add a new column to your tidy data called "VariableNameCensored" which should have values of TRUE if censored and FALSE if not. In the code book you should explain why those values are missing. It is absolutely critical to report to the analyst if there is a reason you know about that some of the data are missing. You should not impute, make up, or throw away missing observations.

In general, try to avoid coding categorical or ordinal variables as numbers. When you enter the value for sex in the tidy data, it should be "male" or "female". The ordinal values in the data set should be "poor", "fair", and "good", not 1, 2, 3. This will avoid potential mix-ups about which direction effects go and will help identify coding errors.
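
A toy table (mine, not from the guide) showing these conventions together: text codes for categorical and ordinal values, NA for missing, and an explicit VariableNameCensored flag column:

    tidy <- data.frame(
      PatientId               = c("P1", "P2", "P3", "P4"),
      Sex                     = c("male", "female", "female", "male"),
      SurveyResponse          = c("poor", "good", "fair", NA),  # NA: missing, mechanism unknown
      BloodMetabolite         = c(1.42, NA, 2.10, 0.88),        # NA: below detection limit
      BloodMetaboliteCensored = c(FALSE, TRUE, FALSE, FALSE)    # TRUE marks the censored value
    )

    # An ordered factor keeps the ordinal levels in the right order
    # without resorting to numeric codes.
    tidy$SurveyResponse <- factor(tidy$SurveyResponse,
                                  levels = c("poor", "fair", "good"),
                                  ordered = TRUE)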

Always encode every piece of information about your observations as text. For example, if you are storing data in Excel and use colored text or cell background formatting to indicate information about an observation ("red variable entries were observed in experiment 1"), that information will be lost when the data are exported as raw text. Every piece of data should be encoded as actual text that survives export.

The instruction list/script

You may have heard this before, but reproducibility is a big deal in computational science. That means, when you submit your paper, the reviewers and the rest of the world should be able to exactly replicate the analyses from raw data all the way to final results. If you are trying to be efficient, you will likely perform some summarization/data analysis steps before the data can be considered tidy.

The ideal thing for you to do when performing summarization is to create a computer script (in R, Python, or something else) that takes the raw data as input and produces the tidy data you are sharing as output. You can try running your script a couple of times and see if the code produces the same output.
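
A skeleton of such a script in R (the file names and cleaning steps are placeholders of mine, stand-ins for whatever your pipeline actually does):

    # Raw data in, tidy data out -- nothing done by hand.
    raw <- read.csv("raw_data/measurements_raw.csv", stringsAsFactors = FALSE)

    # ... each cleaning/summarization step written out explicitly ...
    tidy <- raw[!is.na(raw$PatientId), ]  # e.g. drop rows with no patient id

    write.csv(tidy, "tidy_data.csv", row.names = FALSE)
    # Re-running this script on the same raw file should reproduce
    # tidy_data.csv exactly.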

In many cases, the person who collected the data has an incentive to make it tidy for a statistician, to speed the process of collaboration, but may not know how to code in a scripting language. In that case, what you should provide the statistician is something called pseudocode. It should look something like this:

  1. Step 1 - take the raw file, run version 3.1.2 of summarize software with parameters a=1, b=2, c=3
  2. Step 2 - run the software separately for each sample
  3. Step 3 - take column three of outputfile.out for each sample and that is the corresponding row in the output data set

You should also include information about which system (Mac/Windows/Linux) you used the software on and whether you tried it more than once to confirm it gave the same results. Ideally, you will run this by a fellow student/labmate to confirm that they can obtain the same output file you did.
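
If your processing happens in R, sessionInfo() captures the platform, R version, and package versions in one shot; saving its output next to the tidy data documents the environment:

    # Record the operating system, R version, and loaded package
    # versions alongside the shared data.
    writeLines(capture.output(sessionInfo()), "session_info.txt")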

What you should expect from the analyst

When you turn over a properly tidied data set, it dramatically decreases the workload on the statistician, so hopefully they will get back to you much sooner. But most careful statisticians will check your recipe, ask questions about the steps you performed, and try to confirm that they can obtain the same tidy data that you did with, at minimum, spot checks.
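
One such spot check, sketched in R under the assumption that the recipe was re-run to produce a second copy of the tidy file (both file names are hypothetical):

    # Compare the shared tidy file against a freshly regenerated one.
    shared      <- read.csv("tidy_data.csv",       stringsAsFactors = FALSE)
    regenerated <- read.csv("tidy_data_rerun.csv", stringsAsFactors = FALSE)

    stopifnot(isTRUE(all.equal(shared, regenerated)))  # fails loudly on any mismatch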

You should then expect from the statistician:

  1. An analysis script that performs each of the analyses (not just instructions)
  2. The exact computer code they used to run the analysis
  3. All output files and figures they generated

This is the information you will use in the supplement to establish reproducibility and precision of your results. Each of the steps in the analysis should be clearly explained and you should ask questions when you don't understand what the analyst did. It is the responsibility of both the statistician and the scientist to understand the statistical analysis. You may not be able to perform the exact analyses without the statistician's code, but you should be able to explain why the statistician performed each step to a labmate/your principal investigator.
