DBeam

A connector tool to extract data from SQL databases and import it into GCS using Apache Beam.

This tool can run locally, or on any other backend supported by Apache Beam, e.g. Cloud Dataflow.

DEVELOPMENT STATUS: Mature, maintained and used in production since August 2017. No major features or development planned.

Overview

DBeam is a tool that reads all the data from a single SQL database table, converts the data into Avro, and stores it in a specified location, usually in GCS. It runs as a single-threaded Apache Beam pipeline.

DBeam requires the database credentials, the database table name to read, and the output location for the extracted data. DBeam first runs a single SELECT with LIMIT 1 against the target table to infer the table schema. Once the schema is created, the job is launched and simply streams the table contents via JDBC into the target location as Avro.
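
Conceptually, the schema-inference step boils down to reading JDBC result-set metadata from a LIMIT 1 query. A minimal sketch of the idea in plain JDBC (not DBeam's actual implementation; the connection URL, credentials, and table name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class SchemaInferenceSketch {
  public static void main(String[] args) throws Exception {
    // Placeholders: use the same values you would pass to --connectionUrl,
    // --username and --password.
    try (Connection conn = DriverManager.getConnection(
            "jdbc:postgresql://localhost:5432/my_database",
            "my_database_username", "secret");
         Statement stmt = conn.createStatement();
         // A single-row query is enough to obtain column names and types.
         ResultSet rs = stmt.executeQuery("SELECT * FROM my_table LIMIT 1")) {
      ResultSetMetaData meta = rs.getMetaData();
      for (int i = 1; i <= meta.getColumnCount(); i++) {
        // The JDBC column types drive the generated Avro field types.
        System.out.printf("%s -> %s%n",
            meta.getColumnName(i), meta.getColumnTypeName(i));
      }
    }
  }
}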

See the Generated Avro Schema Type Conversion Details documentation for type-mapping specifics.

dbeam-core package features

  • Supports both PostgreSQL and MySQL JDBC connectors
  • Supports Google CloudSQL managed databases
  • Currently outputs only to the Avro format
  • Reads the database password from an external password file (--passwordFile) or an external KMS-encrypted password file (--passwordFileKmsEncrypted)
  • Can filter records to the current day/partition with the --partitionColumn parameter (see the example after this list)
  • Checks and fails on partition dates that are too old: snapshot dumps are not filtered by a given date/partition, so when running for a partition that is too old, the job fails to avoid writing new data into old partitions (can be disabled with --skipPartitionCheck)
  • Implemented as an Apache Beam SDK pipeline, supporting any of its runners (tested with DirectRunner and DataflowRunner)
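
For example, a daily export can filter on a timestamp column by adding the partition flags to a normal invocation (a sketch; the column name and date are placeholder values):

  --partitionColumn=created_at \
  --partition=2024-01-01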

DBeam export parameters

com.spotify.dbeam.options.DBeamPipelineOptions:

  --connectionUrl=<String>
    The JDBC connection url to perform the export.
  --password=<String>
    Plaintext password used by JDBC connection.
  --passwordFile=<String>
    A path to a file containing the database password.
  --passwordFileKmsEncrypted=<String>
    A path to a file containing the database password, KMS encrypted and base64
    encoded.
  --sqlFile=<String>
    A path to a file containing a SQL query (used instead of --table parameter).
  --table=<String>
    The database table to query and perform the export.
  --username=<String>
    Default: dbeam-extractor
    The database user name used by JDBC to authenticate.

com.spotify.dbeam.options.OutputOptions:

  --output=<String>
    The path for storing the output.
  --dataOnly=<Boolean>
    Default: false
    Store only the data files in output folder, skip queries, metrics and
    metadata files.

com.spotify.dbeam.options.JdbcExportPipelineOptions:
    Configures the DBeam SQL export

  --avroCodec=<String>
    Default: deflate6
    Avro codec (e.g. deflate6, deflate9, snappy).
  --avroDoc=<String>
    The top-level record doc string of the generated avro schema.
  --avroSchemaFilePath=<String>
    Path to file with a target AVRO schema.
  --avroSchemaName=<String>
    The name of the generated avro schema, the table name by default.
  --avroSchemaNamespace=<String>
    Default: dbeam_generated
    The namespace of the generated avro schema.
  --exportTimeout=<String>
    Default: P7D
    Export timeout; after this duration the job is cancelled and the export is
    terminated.
  --fetchSize=<Integer>
    Default: 10000
    Configures JDBC Statement fetch size.
  --limit=<Long>
    Limit the number of output rows; unlimited by default.
  --minPartitionPeriod=<String>
    The minimum partition required for the job not to fail (when the partition
    column is not specified); by default `now() - 2*partitionPeriod`.
  --minRows=<Long>
    Default: -1
    Check that the output has at least this minimum number of rows. Otherwise
    fail the job.
  --partition=<String>
    The date/timestamp of the current partition.
  --partitionColumn=<String>
    The name of a date/timestamp column to filter data based on current
    partition.
  --partitionPeriod=<String>
    The frequency with which the export runs, used to filter based on the
    current partition and also to check if exports are running for partitions
    that are too old.
  --preCommand=<List>
    SQL commands to be executed before query.
  --queryParallelism=<Integer>
    Max number of queries to run in parallel for exports. Single query used if
    nothing specified. Should be used with splitColumn.
  --skipPartitionCheck=<Boolean>
    Default: false
    When partition column is not specified, fails if partition is too old; set
    this flag to ignore this check.
  --splitColumn=<String>
    A long/integer column used to create splits for parallel queries. Should be
    used with queryParallelism.
  --useAvroLogicalTypes=<Boolean>
    Default: false
    Controls whether the generated Avro schema will contain logicalTypes or not.

Input Avro schema file

If an input Avro schema file is provided, DBeam will read it and use some of its properties when the output Avro schema is created.

The following fields are propagated from the input schema into the output schema:

  • record.doc
  • record.namespace
  • record.field.doc
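
A minimal sketch of reading those properties with the Avro Java API (the schema file name is a placeholder for whatever is passed via --avroSchemaFilePath):

import java.io.File;
import org.apache.avro.Schema;

public class InputSchemaSketch {
  public static void main(String[] args) throws Exception {
    Schema schema = new Schema.Parser().parse(new File("input_schema.avsc"));
    System.out.println("record.doc:       " + schema.getDoc());
    System.out.println("record.namespace: " + schema.getNamespace());
    for (Schema.Field field : schema.getFields()) {
      // record.field.doc is propagated per field.
      System.out.println("field " + field.name() + " doc: " + field.doc());
    }
  }
}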

DBeam Parallel Mode

This is a pre-alpha feature currently under development and experimentation.

Read queries used by DBeam to extract data generally don't place any locks, so multiple read queries can run in parallel. When running in parallel mode with --queryParallelism specified, DBeam uses the --splitColumn argument to find the maximum and minimum values in that column. The max and min are then used as range bounds for generating queryParallelism queries, which are run in parallel to read the data. Since the splitColumn is used to calculate the query bounds, and DBeam needs to calculate intermediate bounds for each query, the type of the column must be long/int.

It is assumed that the distribution of values in the splitColumn is sufficiently random and sequential. For example, if the range between the min and max of the split column is divided equally into queryParallelism parts, each part should contain approximately the same number of records. Skew in this data results in straggling queries and hence won't provide much improvement, while sequential records help the queries run faster by reducing random disk seeks.
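
The bound generation can be sketched as follows; this is an illustration of the idea, not DBeam's actual code, and the min/max values stand in for the result of a MIN/MAX query on the split column:

import java.util.ArrayList;
import java.util.List;

public class SplitBoundsSketch {
  // Generate [lower, upper) ranges over the split column for parallel queries.
  static List<long[]> splitRanges(long min, long max, int queryParallelism) {
    List<long[]> ranges = new ArrayList<>();
    long step = Math.max(1, (max - min + 1) / queryParallelism);
    for (long lower = min; lower <= max; lower += step) {
      long upper = Math.min(lower + step, max + 1);
      ranges.add(new long[] {lower, upper});
    }
    return ranges;
  }

  public static void main(String[] args) {
    // e.g. SELECT MIN(id), MAX(id) FROM my_table returned 0 and 99,
    // with --queryParallelism=4:
    for (long[] r : splitRanges(0, 99, 4)) {
      System.out.printf("WHERE id >= %d AND id < %d%n", r[0], r[1]);
    }
  }
}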

Recommended usage: Beam runs each query generated by DBeam on one dedicated vCPU (when running with the Dataflow runner), so for best performance the total number of vCPUs available to a given job should equal the queryParallelism specified. Hence, if the Dataflow workerMachineType is n1-standard-w and numWorkers is n, then queryParallelism q should be a multiple of n*w, and the job is fastest when q = n * w. For example, 4 n1-standard-4 workers provide 16 vCPUs, matching queryParallelism=16.

For an export of a table running from a dedicated PostgreSQL replica, we have seen the best performance, in both vCPU time and wall time, with a queryParallelism of 16. Raising queryParallelism further increases the vCPU time without offering much gain in the wall time of the complete export. It is probably good to use a queryParallelism below 16 when experimenting.

Building

Building and testing can be achieved with mvn:

mvn verify

In order to create a jar with all dependencies under ./dbeam-core/target/dbeam-core-shaded.jar, run the following:

mvn clean package -Ppack

Usage examples

Using Java from the command line:

java -cp ./dbeam-core/target/dbeam-core-shaded.jar \
  com.spotify.dbeam.jobs.JdbcAvroJob \
  --output=gs://my-testing-bucket-name/ \
  --username=my_database_username \
  --password=secret \
  --connectionUrl=jdbc:postgresql://some.database.uri.example.org:5432/my_database \
  --table=my_table

For CloudSQL:

java -cp ./dbeam-core/target/dbeam-core-shaded.jar \
  com.spotify.dbeam.jobs.JdbcAvroJob \
  --output=gs://my-testing-bucket-name/ \
  --username=my_database_username \
  --password=secret \
  --connectionUrl=jdbc:postgresql://google/database?socketFactory=com.google.cloud.sql.postgres.SocketFactory&socketFactoryArg=project:region:cloudsql-instance \
  --table=my_table
  • When using MySQL: --connectionUrl=jdbc:mysql://google/database?socketFactory=com.google.cloud.sql.mysql.SocketFactory&cloudSqlInstance=project:region:cloudsql-instance&useCursorFetch=true
  • Note that ?useCursorFetch=true is important for MySQL: it avoids eagerly fetching all rows; see the MySQL docs for more details.
  • More details can be found at CloudSQL JDBC SocketFactory.

To run a cheap data extraction as a way to validate, one can add the --limit=10 --skipPartitionCheck parameters, as in the example below. This runs the queries, generates the schemas, and exports only 10 records, which should finish within a few seconds.
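
For example, appending those flags to the first invocation above (same placeholder values):

java -cp ./dbeam-core/target/dbeam-core-shaded.jar \
  com.spotify.dbeam.jobs.JdbcAvroJob \
  --output=gs://my-testing-bucket-name/ \
  --username=my_database_username \
  --password=secret \
  --connectionUrl=jdbc:postgresql://some.database.uri.example.org:5432/my_database \
  --table=my_table \
  --limit=10 \
  --skipPartitionCheck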

Password configuration

The database password can be configured by simply passing --password=writepasswordhere, --passwordFile=/path/to/file/containing/password, or --passwordFile=gs://gcs-bucket/path/to/file/containing/password.

A more robust configuration is to point to a Google KMS encrypted file. DBeam will try to decrypt using KMS if the file ends with .encrypted (e.g. --passwordFileKmsEncrypted=gs://gcs-bucket/path/to/db-password.encrypted).

The file should contain base64-encoded, KMS-encrypted content. It can be generated using gcloud as follows:

echo -n "super_secret_password" \
  | gcloud kms encrypt \
      --location "global" \
      --keyring "dbeam" \
      --key "default" \
      --project "mygcpproject" \
      --plaintext-file - \
      --ciphertext-file - \
  | base64 \
  | gsutil cp - gs://gcs-bucket/path/to/db-password.encrypted
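
To sanity-check the stored file, it can be decoded and decrypted back with the same key (a sketch, assuming GNU base64 and the same keyring, key, and project as above):

gsutil cat gs://gcs-bucket/path/to/db-password.encrypted \
  | base64 --decode \
  | gcloud kms decrypt \
      --location "global" \
      --keyring "dbeam" \
      --key "default" \
      --project "mygcpproject" \
      --ciphertext-file - \
      --plaintext-file -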

KMS location, keyring, and key can be configured via Java Properties, defaults are:

java \
  -DKMS_KEYRING=dbeam \
  -DKMS_KEY=default \
  -DKMS_LOCATION=global \
  -DKMS_PROJECT=default_gcp_project \
  -cp ./dbeam-core/target/dbeam-core-shaded.jar \
  com.spotify.dbeam.jobs.JdbcAvroJob \
  ...

Using as a library

To include the DBeam library in a Maven project, add the following dependency to pom.xml:

<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>dbeam-core</artifactId>
  <version>${dbeam.version}</version>
</dependency>

To include the DBeam library in an sbt project, add the following dependency to build.sbt:

  libraryDependencies ++= Seq(
   "com.spotify" % "dbeam-core" % dbeamVersion
  )
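
With the dependency on the classpath, the export job can also be launched programmatically. A minimal sketch, assuming the same arguments as the CLI (JdbcAvroJob is the entry point used in the command-line examples above; all values are placeholders):

import com.spotify.dbeam.jobs.JdbcAvroJob;

public class RunDBeamSketch {
  public static void main(String[] args) throws Exception {
    // Same arguments as the command-line invocation above.
    JdbcAvroJob.main(new String[] {
      "--connectionUrl=jdbc:postgresql://localhost:5432/my_database",
      "--username=my_database_username",
      "--passwordFile=/path/to/password",
      "--table=my_table",
      "--output=gs://my-testing-bucket-name/"
    });
  }
}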

Development

Make sure you have mvn installed. As an editor, IntelliJ IDEA is recommended.

To test and verify changes during development, run:

mvn verify

Or:

mvn verify -Pcoverage

This project adheres to the Open Code of Conduct. By participating, you are expected to honor this code.

Release

Every push to master deploys a snapshot version to Sonatype.


License

Copyright 2016-2022 Spotify AB.

Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0

