Dynamometer

A tool for scale and performance testing of HDFS with a specific focus on the NameNode.

Dynamometer in Hadoop

Please be aware that Dynamometer has now been committed into Hadoop itself in the JIRA ticket HDFS-12345. It is located under the hadoop-tools/hadoop-dynamometer submodule. This GitHub project will continue to be maintained for testing against the 2.x release line of Hadoop, but all versions of Dynamometer which work with Hadoop 3 will only appear in Hadoop, and future development will primarily occur there.

Overview

Dynamometer is a tool to performance test Hadoop's HDFS NameNode. The intent is to provide a real-world environment by initializing the NameNode against a production file system image and replaying a production workload collected via e.g. the NameNode's audit logs. This allows for replaying a workload which is not only similar in characteristic to that experienced in production, but actually identical.

Dynamometer launches a YARN application which starts a single NameNode and a configurable number of DataNodes, simulating an entire HDFS cluster as a single application. An additional workload job, run as a MapReduce job, accepts audit logs as input and uses the information contained within to submit matching requests to the NameNode, inducing load on the service.

Dynamometer can execute this same workload against different Hadoop versions or with different configurations, allowing for the testing of configuration tweaks and code changes at scale without the necessity of deploying to a real large-scale cluster.

Throughout this documentation, we will use "Dyno-HDFS", "Dyno-NN", and "Dyno-DN" to refer to the HDFS cluster, NameNode, and DataNodes (respectively) which are started inside of a Dynamometer application. Terms like HDFS, YARN, and NameNode used without qualification refer to the existing infrastructure on top of which Dynamometer is run.

Requirements

Dynamometer is based around YARN applications, so an existing YARN cluster will be required for execution. It also requires an accompanying HDFS instance to store some temporary files for communication.

Please be aware that Dynamometer makes certain assumptions about HDFS, and thus only works with certain versions. As discussed at the start of this README, this project only works with Hadoop 2; support for Hadoop 3 is introduced in the version of Dynamometer within the Hadoop repository. Below is a list of known supported versions of Hadoop which are compatible with Dynamometer:

  • Hadoop 2.7 starting at 2.7.4
  • Hadoop 2.8 starting at 2.8.4

Hadoop 2.8.2 and 2.8.3 are compatible as a cluster version on which to run Dynamometer, but are not supported as a version-under-test.

Building

Dynamometer consists of three main components:

  • Infrastructure: This is the YARN application which starts a Dyno-HDFS cluster.
  • Workload: This is the MapReduce job which replays audit logs.
  • Block Generator: This is a MapReduce job used to generate input files for each Dyno-DN; its execution is a prerequisite step to running the infrastructure application.

They are built through standard Gradle means, i.e. ./gradlew build; this project uses the Gradle wrapper. In addition to compiling everything, this will generate a distribution tarball, containing all necessary components for an end user, at build/distributions/dynamometer-VERSION.tar (a zip is also generated; their contents are identical). This distribution does not contain any Hadoop dependencies, which are necessary to launch the application, as it assumes Dynamometer will be run from a machine which has a working installation of Hadoop. To include Dynamometer's Hadoop dependencies, use build/distributions/dynamometer-fat-VERSION.tar.
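
As a quick sketch (VERSION is a placeholder for whatever version your build produces), building and unpacking the slim distribution looks like:

./gradlew build
# Unpack the generated distribution; the bin/ and lib/ directories referenced
# below come from this tarball.
tar -xf build/distributions/dynamometer-VERSION.tar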

Usage

Scripts discussed below can be found in the bin directory of the distribution. The corresponding Java JAR files can be found in the lib directory.

Preparing Requisite Files

A number of steps are required in advance of starting your first Dyno-HDFS cluster:

  • Collect an fsimage and related files from your NameNode. This will include the fsimage_TXID file which the NameNode creates as part of checkpointing, the fsimage_TXID.md5 containing the md5 hash of the image, the VERSION file containing some metadata, and the fsimage_TXID.xml file which can be generated from the fsimage using the offline image viewer:

    hdfs oiv -i fsimage_TXID -o fsimage_TXID.xml -p XML
    

    It is recommended that you collect these files from your Secondary/Standby NameNode if you have one to avoid placing additional load on your Active NameNode.

    All of these files must be placed somewhere on HDFS where the various jobs will be able to access them. They should all be in the same folder, e.g. hdfs:///dyno/fsimage.

    All these steps can be automated with the upload-fsimage.sh script, e.g.:

    ./bin/upload-fsimage.sh 0001 hdfs:///dyno/fsimage
    

    Where 0001 is the transaction ID of the desired fsimage. See usage info of the script for more detail.

  • Collect the Hadoop distribution tarball to use to start the Dyno-NN and -DNs. For example, if testing against Hadoop 2.7.4, use hadoop-2.7.4.tar.gz. This distribution contains several components unnecessary for Dynamometer (e.g. YARN), so to reduce its size, you can optionally use the create-slim-hadoop-tar.sh script:

    ./bin/create-slim-hadoop-tar.sh hadoop-VERSION.tar.gz
    

    The Hadoop tar can be present on HDFS or locally on the machine from which the client will be run. Its path will be supplied to the client via the -hadoop_binary_path argument.

    Alternatively, if you use the -hadoop_version argument, you can simply specify which version you would like to run against (e.g. '2.7.4') and the client will attempt to download it automatically from an Apache mirror. See the usage information of the client for more details.

  • Prepare a configuration directory. You will need to specify a configuration directory with the standard Hadoop configuration layout, e.g. it should contain etc/hadoop/*-site.xml. This determines with what configuration the Dyno-NN and -DNs will be launched. Configurations that must be modified for Dynamometer to work properly (e.g. fs.defaultFS or dfs.namenode.name.dir) will be overridden at execution time. This can be a directory if it is available locally, else an archive file on local or remote (HDFS) storage.
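
For example, a minimal sketch of preparing such a configuration directory (the source paths under /etc/hadoop/conf are an assumption; copy from wherever your site configuration actually lives):

# Lay out a Hadoop-style configuration directory; the examples below pass
# this directory to the client via -conf_path.
mkdir -p my-hadoop-conf/etc/hadoop
cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml my-hadoop-conf/etc/hadoop/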

Execute the Block Generation Job

This will use the fsimage_TXID.xml file to generate the list of blocks that each Dyno-DN should advertise to the Dyno-NN. It runs as a MapReduce job.

./bin/generate-block-lists.sh
    -fsimage_input_path hdfs:///dyno/fsimage/fsimage_TXID.xml
    -block_image_output_dir hdfs:///dyno/blocks
    -num_reducers R
    -num_datanodes D

In this example, the XML file uploaded above is used to generate block listings into hdfs:///dyno/blocks. R reducers are used for the job, and D block listings are generated - this will determine how many Dyno-DNs are started in the Dyno-HDFS cluster.
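
As a quick sanity check (a sketch; the exact file names under the output directory may vary), you can confirm that the block listings were produced before moving on:

# One block listing should exist per Dyno-DN (D listings in total).
hdfs dfs -ls hdfs:///dyno/blocks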

Prepare Audit Traces (Optional)

This step is only necessary if you intend to use the audit trace replay capabilities of Dynamometer; if you just intend to start a Dyno-HDFS cluster you can skip to the next section.

The audit trace replay accepts one input file per mapper, and currently supports two input formats, configurable via the auditreplay.command-parser.class configuration.

The default is a direct format, com.linkedin.dynamometer.workloadgenerator.audit.AuditLogDirectParser. This accepts files in the format produced by a standard configuration audit logger, e.g. lines like:

1970-01-01 00:00:42,000 INFO FSNamesystem.audit: allowed=true	ugi=hdfs	ip=/127.0.0.1	cmd=open	src=/tmp/foo	dst=null	perm=null	proto=rpc

When using this format you must also specify auditreplay.log-start-time.ms, which should be (in milliseconds since the Unix epoch) the start time of the audit traces. This is needed for all mappers to agree on a single start time. For example, if the above line was the first audit event, you would specify auditreplay.log-start-time.ms=42000.
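
A small sketch of deriving that value from the first line of your trace (assumes GNU date and a UTC timestamp; the example timestamp matches the audit line shown above):

# 1970-01-01 00:00:42,000 UTC -> 42000 milliseconds since the Unix epoch
echo "auditreplay.log-start-time.ms=$(( $(date -u -d '1970-01-01 00:00:42' +%s) * 1000 ))"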

The other supported format is com.linkedin.dynamometer.workloadgenerator.audit.AuditLogHiveTableParser. This accepts files in the format produced by a Hive query with output fields, in order:

  • relativeTimestamp: event time offset, in milliseconds, from the start of the trace
  • ugi: user information of the submitting user
  • command: name of the command, e.g. 'open'
  • source: source path
  • dest: destination path
  • sourceIP: source IP of the event

Assuming your audit logs are available in Hive, this can be produced via a Hive query looking like:

INSERT OVERWRITE DIRECTORY '${outputPath}'
SELECT (timestamp - ${startTimestamp}) AS relativeTimestamp, ugi, command, source, dest, sourceIP
FROM '${auditLogTableLocation}'
WHERE timestamp >= ${startTimestamp} AND timestamp < ${endTimestamp}
DISTRIBUTE BY src
SORT BY relativeTimestamp ASC;

Start the Infrastructure Application & Workload Replay

At this point you're ready to start up a Dyno-HDFS cluster and replay some workload against it! Note that the output from the previous two steps can be reused indefinitely.

The client which launches the Dyno-HDFS YARN application can optionally launch the workload replay job once the Dyno-HDFS cluster has fully started. This makes each replay into a single execution of the client, enabling easy testing of various configurations. You can also launch the two separately to have more control. Similarly, it is possible to launch Dyno-DNs for an external NameNode which is not controlled by Dynamometer/YARN. This can be useful for testing NameNode configurations which are not yet supported (e.g. HA NameNodes). You can do this by passing the -namenode_servicerpc_addr argument to the infrastructure application with a value that points to an external NameNode's service RPC address.
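
As a rough sketch of that last mode (hedged: the service RPC address value is a placeholder, and the exact set of arguments required when attaching to an external NameNode is not covered here; consult the client's -help output):

./bin/start-dynamometer-cluster.sh \
    -hadoop_binary_path hadoop-2.7.4.tar.gz \
    -conf_path my-hadoop-conf \
    -block_list_path hdfs:///dyno/blocks \
    -namenode_servicerpc_addr external-nn-host:9022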

Manual Workload Launch

First launch the infrastructure application to begin the startup of the internal HDFS cluster, e.g.:

./bin/start-dynamometer-cluster.sh
    -hadoop_binary_path hadoop-2.7.4.tar.gz
    -conf_path my-hadoop-conf
    -fs_image_dir hdfs:///fsimage
    -block_list_path hdfs:///dyno/blocks

This demonstrates the required arguments. You can run this with the -help flag to see further usage information.

The client will track the Dyno-NN's startup progress and how many Dyno-DNs it considers live. It will notify via logging when the Dyno-NN has exited safemode and is ready for use.

At this point, a workload job (map-only MapReduce job) can be launched, e.g.:

./bin/start-workload.sh
    -Dauditreplay.input-path=hdfs:///dyno/audit_logs/
    -Dauditreplay.output-path=hdfs:///dyno/results/
    -Dauditreplay.num-threads=50
    -nn_uri hdfs://namenode_address:port/
    -start_time_offset 5m
    -mapper_class_name AuditReplayMapper

The type of workload generation is configurable; AuditReplayMapper replays an audit log trace as discussed previously. The AuditReplayMapper is configured via configuration properties: auditreplay.input-path, auditreplay.output-path, and auditreplay.num-threads are required to specify the input path for audit log files, the output path for the results, and the number of threads per map task. A number of map tasks equal to the number of files in input-path will be launched; each task will read in one of these input files and use num-threads threads to replay the events contained within that file. A best effort is made to faithfully replay the audit log events at the same pace at which they originally occurred; optionally, this can be adjusted by specifying auditreplay.rate-factor, a multiplicative factor applied to the rate of replay, e.g. use 2.0 to replay the events at twice the original speed.
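
For instance, a sketch of the same invocation as above with the optional rate factor added to replay at twice the original speed (this assumes auditreplay.rate-factor can be passed as a -D property like the other auditreplay.* settings):

./bin/start-workload.sh \
    -Dauditreplay.input-path=hdfs:///dyno/audit_logs/ \
    -Dauditreplay.output-path=hdfs:///dyno/results/ \
    -Dauditreplay.num-threads=50 \
    -Dauditreplay.rate-factor=2.0 \
    -nn_uri hdfs://namenode_address:port/ \
    -start_time_offset 5m \
    -mapper_class_name AuditReplayMapper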

The AuditReplayMapper will output the benchmark results to a file part-r-00000 in the output directory in CSV format. Each line is in the format user,type,operation,numops,cumulativelatency, e.g. hdfs,WRITE,MKDIRS,2,150.
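
To pull the results back for inspection, a simple sketch using the output path from the example above:

# Each line is user,type,operation,numops,cumulativelatency
hdfs dfs -cat hdfs:///dyno/results/part-r-00000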

Integrated Workload Launch

To have the infrastructure application client launch the workload automatically, parameters for the workload job are passed to the infrastructure script. Only the AuditReplayMapper is supported in this fashion at this time. To launch an integrated application with the same parameters as were used above, the following can be used:

./bin/start-dynamometer-cluster.sh
    -hadoop_binary_path hadoop-2.7.4.tar.gz
    -conf_path my-hadoop-conf
    -fs_image_dir hdfs:///fsimage
    -block_list_path hdfs:///dyno/blocks
    -workload_replay_enable
    -workload_input_path hdfs:///dyno/audit_logs/
    -workload_output_path hdfs:///dyno/results/
    -workload_threads_per_mapper 50
    -workload_start_delay 5m

When run in this way, the client will automatically handle tearing down the Dyno-HDFS cluster once the workload has completed. To see the full list of supported parameters, run this with the -help flag.
