  • Stars: 763
  • Rank: 59,519 (Top 2%)
  • Language: Shell
  • License: Apache License 2.0
  • Created: over 10 years ago
  • Updated: over 3 years ago

Repository Details

Apache Spark on Docker

This repository contains a Dockerfile for building a Docker image with Apache Spark. The image is based on our Hadoop Docker image, available on the SequenceIQ GitHub page; the base Hadoop image is also available as an official Docker image.
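
The base Hadoop image can also be pulled on its own; a minimal sketch, assuming the sequenceiq/hadoop-docker repository listed further below and a tag matching the Hadoop 2.6.0 version used here:

# optional: pre-pull the base Hadoop image (the 2.6.0 tag is an assumption matching the Hadoop version)
docker pull sequenceiq/hadoop-docker:2.6.0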

Pull the image from the Docker Repository

docker pull sequenceiq/spark:1.6.0
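
A quick way to confirm the pull succeeded is to list the local images for the repository (an extra check, not part of the original instructions):

# verify the image is available locally
docker images sequenceiq/spark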

Building the image

docker build --rm -t sequenceiq/spark:1.6.0 .
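
The build has to run in the directory that contains the Dockerfile; a minimal sketch, assuming the repository is cloned from GitHub under the sequenceiq organization (the docker-spark repository name is an assumption):

# clone the repository (name assumed) and build the image from its Dockerfile
git clone https://github.com/sequenceiq/docker-spark.git
cd docker-spark
docker build --rm -t sequenceiq/spark:1.6.0 .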

Running the image

  • if using boot2docker, make sure your VM has more than 2GB of memory
  • in your /etc/hosts file, add $(boot2docker ip) as host 'sandbox' to make it easier to access the sandbox UIs (see the sketch after the run commands below)
  • open the YARN and Spark UI ports when running the container (8088: ResourceManager UI, 8042: NodeManager UI, 4040: Spark application UI)
docker run -it -p 8088:8088 -p 8042:8042 -p 4040:4040 -h sandbox sequenceiq/spark:1.6.0 bash

or

docker run -d -h sandbox sequenceiq/spark:1.6.0
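
A minimal sketch of the /etc/hosts entry mentioned above, assuming boot2docker is running on the host machine:

# map the boot2docker VM's IP to the 'sandbox' hostname (run on the host, not in the container)
echo "$(boot2docker ip) sandbox" | sudo tee -a /etc/hosts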

Versions

Hadoop 2.6.0 and Apache Spark v1.6.0 on CentOS

Testing

There are two deploy modes that can be used to launch Spark applications on YARN.

YARN-client mode

In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.

# run the spark shell
spark-shell \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1

# execute the following command, which should return 1000
scala> sc.parallelize(1 to 1000).count()
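
The running shell also shows up as a YARN application; one way to check it from the host is the ResourceManager's REST API, assuming the sandbox hosts entry and the mapped 8088 port described above:

# list YARN applications via the ResourceManager REST API
curl http://sandbox:8088/ws/v1/cluster/apps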

YARN-cluster mode

In yarn-cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application.

Estimating Pi (yarn-cluster mode):

# execute the following command, which should write "Pi is roughly 3.1418" into the logs
# note: in cluster mode you must specify the --files argument to enable metrics
spark-submit \
--class org.apache.spark.examples.SparkPi \
--files $SPARK_HOME/conf/metrics.properties \
--master yarn-cluster \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples-1.6.0-hadoop2.6.0.jar
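
In yarn-cluster mode the driver output lands in the YARN container logs rather than on your console; a minimal sketch for retrieving it, assuming you substitute the application ID printed by spark-submit:

# fetch the logs of the finished application and look for the result
# (the application ID below is a hypothetical placeholder)
yarn logs -applicationId application_1452000000000_0001 | grep "Pi is roughly"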

Estimating Pi (yarn-client mode):

# execute the following command, which should print "Pi is roughly 3.1418" to the screen
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples-1.6.0-hadoop2.6.0.jar

Submitting from outside the container

To use Spark from outside the container, set the YARN_CONF_DIR environment variable to a directory containing a configuration appropriate for the Docker setup. The repository ships such a configuration in the yarn-remote-client directory.

export YARN_CONF_DIR="`pwd`/yarn-remote-client"

The HDFS inside the Docker container can be accessed only by root. When submitting Spark applications from outside the cluster as a user other than root, set the HADOOP_USER_NAME variable so that the root user is used.

export HADOOP_USER_NAME=root
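
Putting the two variables together, a minimal sketch of a submission from outside the container, assuming a local Spark 1.6.0 client and the same example jar used above:

# submit SparkPi from the host using the remote-client YARN configuration
export YARN_CONF_DIR="`pwd`/yarn-remote-client"
export HADOOP_USER_NAME=root
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples-1.6.0-hadoop2.6.0.jar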

More Repositories

1. hadoop-docker - Hadoop docker image (Dockerfile, 1,208 stars)
2. docker-ambari - Docker image with Ambari (Shell, 292 stars)
3. sequenceiq-samples - SequenceIQ Hadoop examples (Java, 116 stars)
4. docker-alpine-dig (Makefile, 53 stars)
5. docker-kylin - Kylin running in a Docker cluster (Shell, 46 stars)
6. docker-ngrokd - Dockerized self-hosted ngrok daemon (Shell, 41 stars)
7. docker-liquibase - Docker image with Liquibase (Shell, 40 stars)
8. periscope - Periscope brings SLA policy based autoscaling to Hadoop (Java, 36 stars)
9. docker-serf - Serf on Docker containers (Shell, 34 stars)
10. docker-pam (33 stars)
11. docker-hadoop-ubuntu - A Hadoop image on Ubuntu (Shell, 32 stars)
12. docker-phoenix - SQL on HBase with Apache Phoenix in Docker (Shell, 30 stars)
13. yarn-monitoring - Hadoop YARN monitoring with R (R, 19 stars)
14. cloudbreak-shell - CLI shell for the Cloudbreak project (16 stars)
15. docker-kerberos - KDC for Cloudbreak provisioned Hadoop clusters (Shell, 15 stars)
16. docker-spark-native-yarn (13 stars)
17. docker-dnsmasq - dnsmasq on Docker containers (Shell, 13 stars)
18. uluwatu - Uluwatu is a web UI for Cloudbreak, a cloud-agnostic Hadoop as a Service API (13 stars)
19. docker-baywatch-client - Logstash client for baywatch (Shell, 12 stars)
20. docker-hadoop-build - Docker container to build Apache Hadoop (Shell, 12 stars)
21. docker-baywatch - Elasticsearch and Kibana (Nginx, 12 stars)
22. docker-tez - Create an official docker.io image with Hadoop 2.5 and Tez 0.5.0 (Shell, 10 stars)
23. vagrant-boxes (Ruby, 9 stars)
24. docker-zedapp (Shell, 9 stars)
25. docker-drill - Apache Drill Docker container (Shell, 9 stars)
26. docker-ambari-shell - Docker image to run ambari-shell (Shell, 6 stars)
27. docker-busybox - dev tools on busybox (5 stars)
28. ambari-vagrant - Ambari on Vagrant (Shell, 5 stars)
29. cloudbreak-rest-client - Groovy client library for the Cloudbreak project (5 stars)
30. docker-enter (5 stars)
31. munchausen - docker (Go, 5 stars)
32. hbase-client - SequenceIQ HBase utility/client library (Java, 5 stars)
33. docker-hoya - A docker file to create an official docker.io image with Hadoop 2.3, Hortonworks Hoya 0.13 and HBase 0.98 (Shell, 4 stars)
34. azure-rest-client - Groovy client library for the Azure cloud (Groovy, 4 stars)
35. docker-dind - Docker in Docker (Shell, 4 stars)
36. bintray-cli (3 stars)
37. sequenceiq.github.io - sequenceiq GH page: http://sequenceiq.github.io/ official page: (HTML, 3 stars)
38. docker-java (3 stars)
39. sultans - Centralized user management for SequenceIQ apps (HTML, 3 stars)
40. docker-mkdocs - Docker container to generate nice documentation (CSS, 3 stars)
41. consul-plugins-gcp-p12 - Plugin to save a base64 encoded p12 file from Consul's KV store to a file (Shell, 3 stars)
42. consul-plugins-gcs-connector - Plugin to download the Google Cloud Storage Connector for Hadoop (Shell, 2 stars)
43. gunroot - modules for https://github.com/gliderlabs/glidergun (Shell, 2 stars)
44. job-runner (Scala, 2 stars)
45. docker-tag (Shell, 2 stars)
46. cb-experimental (SQLPL, 2 stars)
47. hadoop-cloud-scripts - Helper scripts to provision Hadoop in the cloud (Shell, 2 stars)
48. consul-plugins-spark (Shell, 2 stars)
49. docker-ssh - Docker image with SSH server (Shell, 2 stars)
50. docker-s3upload (Shell, 2 stars)
51. docker-consul-watch-plugn (Shell, 2 stars)
52. consul-plugins-titan (Shell, 1 star)
53. consul-plugins-install - Plugin to install a new plugin from a GitHub repository (Shell, 1 star)
54. docker-dev - Base dev Docker image (1 star)
55. slider (Python, 1 star)
56. docker-knox - Hadoop 2.3 and Apache Knox 0.3.0 on Docker (Shell, 1 star)
57. ambari-formation (Python, 1 star)
58. aws-domain (Shell, 1 star)
59. docker-events (Shell, 1 star)
60. blog - Company blog, available at blog.sequenceiq.com (JavaScript, 1 star)
61. docker-cloudbreak-releaser (Shell, 1 star)
62. docker-hipchatbot (CoffeeScript, 1 star)
63. cloudbreak-deployment (1 star)