
grafana-spark-dashboards

Scripts for generating Grafana dashboards for monitoring Spark jobs

This repository contains a Grafana "scripted dashboard", spark.js, designed to display metrics collected from Spark applications. You can read more about the background and motivation here.

What You'll See

Beautiful graphs of all of your Spark metrics!

Screenshot of Spark metrics dashboard

What's Under the Hood

Here's a diagram of most of the pieces involved in our Spark-on-YARN + Graphite + Grafana infrastructure that contributes to the above graphs:

Gliffy diagram of Spark metrics infrastructure

Installation

There are several pieces that need to be installed and made to talk to each other here:

  1. Install Graphite.
  2. Configure Spark to send metrics to your Graphite.
  3. Install Grafana with your Graphite as a data source.
  4. Install your scripted dashboard in your Grafana installation (don't worry; just a symlink).
  5. Configure your scripted dashboard (don't worry; just a hostname find&replace).

Each of these steps is at least briefly discussed below.

Install Graphite

This can be an arduous process, but try following the instructions at the Graphite docs or in the various guides around the internet.

Configure Spark to Send Metrics to Graphite

This StackOverflow answer that I wrote explains the process for configuring Spark to send metrics to Graphite.

Alternatively, you can modify metrics.properties under the conf folder on each node; then you can submit your application without appending any --conf/--files arguments.
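Concretely, a minimal metrics.properties enabling Spark's GraphiteSink might look like the following sketch (the hostname and port are placeholders; check the Spark metrics docs for your version):

```shell
# Sketch: write a minimal metrics.properties enabling Spark's GraphiteSink.
# graphite.example.com:2003 is a placeholder for your Carbon host/port.
cat > metrics.properties <<'EOF'
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003
*.sink.graphite.period=10
*.sink.graphite.unit=seconds
EOF
# If it is not placed under conf/ on each node, pass it at submit time:
#   spark-submit --files metrics.properties \
#     --conf spark.metrics.conf=metrics.properties ...
grep -c 'GraphiteSink' metrics.properties   # → 1
```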

Install and Configure Grafana

The Grafana docs are pretty good, but a little lacking in the "quick start" department. The basic steps you need to follow are:

git clone git@github.com:grafana/grafana.git
cd grafana
ln -s config.sample.js src/config.js  # create src/config.js from the provided sample.
<edit src/config.js: uncomment Graphite section and set the hostname:port to your Graphite's.>

Here is an example src/config.js that I use, with hostnames and ports redacted.

Install and Configure nginx

Again, primary docs are always a good place to go, but here is an example nginx.conf that I use that serves my Grafana files.

Optional: Install and Configure Elasticsearch

If you want to use Grafana's dashboard-saving and -loading functionality, the easiest thing to do is to point it at an elasticsearch instance.

Install Elasticsearch, run it on the default port 9200, and don't delete the elasticsearch portion of the sample src/config.js I showed you.

After the above steps, you should be able to go to your <grafana host>:8090 and see stub "random walk" graphs.

Install Scripted Dashboard in Grafana

This is easy:

ln -s $THIS_REPO/spark.js $GRAFANA_REPO/src/app/dashboards/spark.js

Now you should be able to go to http://<grafana host>:8090/#/dashboard/script/spark.js?app=$YARN_APP_ID&maxExecutorId=$N, substituting values for the URL params, and see a Spark dashboard!

If your Spark cluster is a standalone cluster, you can simply go to http://<grafana host>:8090/#/dashboard/script/spark.js?prefix=$APP_ID to see your Spark dashboard.
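For illustration, here is what a fully substituted URL might look like; the hostname and YARN application ID below are hypothetical placeholders:

```shell
# Hypothetical example of a fully substituted dashboard URL
# (grafana.example.com and the app ID are placeholders).
host="grafana.example.com:8090"
app_id="application_1416262214111_0006"
echo "http://${host}/#/dashboard/script/spark.js?app=${app_id}&maxExecutorId=16"
```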

spark.js URL API

Here are the URL parameters that you can pass to spark.js:

Important / Required Parameters

&app=<YARN app ID>

Using this is highly recommended: any unique substring of a YARN application ID that you can see on your ResourceManager's web UI will do.

For example, to obtain graphs for my latest job shown here:

Yarn ResourceManager screenshot

I can simply pass ?app=0006 to spark.js.

This will hit your ResourceManager's JSON API (via the proxy you've set up on the same host, port 8091), find the application that matches 0006, and pull in:

  • the application ID, which by default is the first segment of all metric names that Spark emits,
  • the start time, and
  • the end time, or a sentinel "now" value if the job is still running.
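A rough sketch of that lookup, using a canned stand-in for the ResourceManager's JSON response (the curl line, hostname, and app ID are placeholders, and the matching logic here mirrors the description above rather than spark.js internals):

```shell
# Sketch of resolving an &app= substring against the YARN ResourceManager's
# JSON API. In reality you would fetch the response with something like:
#   curl -s "http://<resourcemanager host>:8091/ws/v1/cluster/apps"
json='{"apps":{"app":[{"id":"application_1416262214111_0006","startedTime":1416500000000,"finishedTime":0}]}}'
echo "$json" | python3 -c '
import json, sys
apps = json.load(sys.stdin)["apps"]["app"]
# Find the application whose ID contains the requested substring.
match = next(a for a in apps if "0006" in a["id"])
print(match["id"])
'
```

A finishedTime of 0 is how YARN reports a still-running application, which is where the sentinel "now" end time comes from.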

If you don't specify the app parameter, then the next three parameters should be included:

&prefix=<metric prefix>

Pass the full application ID (which is the YARN application ID if you are running Spark on YARN, otherwise the spark.app.id configuration param that your Spark job ran with) here if it is not fetched via the app parameter documented above.

&from=YYYYMMDDTHHMMSS, &to=YYYYMMDDTHHMMSS

These will be inferred from the YARN application if the app param is used, otherwise they should be set manually; defaults are now-1h and now.
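A quick sketch of building manual from/to values in that format, e.g. for the last two hours (assumes GNU date; BSD/macOS date uses different relative-date flags):

```shell
# Sketch: produce from/to values in spark.js's YYYYMMDDTHHMMSS format
# covering the last two hours (GNU date syntax).
from=$(date -u -d '2 hours ago' +%Y%m%dT%H%M%S)
to=$(date -u +%Y%m%dT%H%M%S)
echo "&from=${from}&to=${to}"
```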

&maxExecutorId=<N>

Tells spark.js how many per-executor graphs to draw, and how to seed the $executorRange template variable with sensible values.

Miscellaneous / Optional Parameters

&collapseExecutors=<bool>

Collapse the top row containing per-executor JVM statistics, which is commonly quite large and takes up many folds of screen height.

Default: true.

&executors=<ranges>

Comma-delimited list of dash-delimited pairs of integers denoting specific executors to show.

All ranges passed here, as well as their union, will be added as options to the $executorRange template variable.

Example: 1-12,22-23.
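To make the format concrete, here is a sketch of expanding such a value into the executor IDs it denotes (the parsing mirrors the documented format, not spark.js internals):

```shell
# Sketch: expand an &executors= value like "1-12,22-23" into the
# individual executor IDs it denotes.
python3 -c '
ranges = "1-12,22-23"
ids = []
for r in ranges.split(","):
    lo, hi = map(int, r.split("-"))
    ids.extend(range(lo, hi + 1))      # each pair is inclusive
print(len(ids), min(ids), max(ids))    # → 14 1 23
'
```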

&sharedTooltip=<bool>

Toggle whether each graph's tooltip shows values for every plotted metric at a given x-axis value or for just a single metric that's being moused over.

Default: true.

&executorLegends=<bool>

Show legends on per-executor graphs.

Default: true.

&legends=<bool>

Show legends on graphs other than the per-executor ones discussed above.

Default: false. Many of these panels can plot 100s of executors at the same time, causing the legend to be cumbersome.

&percentilesAndTotals=<bool>

Render nth-percentiles and sums on certain graphs; can slow down rendering.

Default: false.

spark.js Templated Variables

spark.js exposes three templated variables that can be dynamically changed and cause dashboard updates:

spark.js templated variables

  • $prefix: the first piece of your Spark metrics' names; analogous to the prefix URL param.
  • $executorRange: executor ranges to which graphs plotting a given metric across multiple executors are restricted.
  • $driver: typically unused; when sending metrics from Spark to Graphite via StatsD, the "driver" identifier can lose its angle-brackets. This variable provides an escape hatch in that situation.

Troubleshooting

Please file issues if you run into any problems, as this is fairly "alpha".
