[DEPRECATED] Databricks Labs CI/CD Templates

Manage your Databricks deployments and CI with code.

This repository provides a template for automated Databricks CI/CD pipeline creation and deployment.

NOTE: This repository is deprecated and provided for maintenance purposes only. Please use the dbx init functionality instead.

Sample project structure (with GitHub Actions)

.
├── .dbx
│   └── project.json
├── .github
│   └── workflows
│       ├── onpush.yml
│       └── onrelease.yml
├── .gitignore
├── README.md
├── conf
│   ├── deployment.json
│   └── test
│       └── sample.json
├── pytest.ini
├── sample_project
│   ├── __init__.py
│   ├── common.py
│   └── jobs
│       ├── __init__.py
│       └── sample
│           ├── __init__.py
│           └── entrypoint.py
├── setup.py
├── tests
│   ├── integration
│   │   └── sample_test.py
│   └── unit
│       └── sample_test.py
└── unit-requirements.txt

Some explanations regarding structure:

  • .dbx - an auxiliary folder where metadata about environments and execution context is stored
  • sample_project - a Python package with your code (the directory name follows your project name)
  • tests - directory with your package tests
  • conf/deployment.json - deployment configuration file. Please read the following section for a full reference.
  • .github/workflows/ - workflow definitions for GitHub Actions

Sample project structure (with Azure DevOps)

.
├── .dbx
│   └── project.json
├── .gitignore
├── README.md
├── azure-pipelines.yml
├── conf
│   ├── deployment.json
│   └── test
│       └── sample.json
├── pytest.ini
├── sample_project_azure_dev_ops
│   ├── __init__.py
│   ├── common.py
│   └── jobs
│       ├── __init__.py
│       └── sample
│           ├── __init__.py
│           └── entrypoint.py
├── setup.py
├── tests
│   ├── integration
│   │   └── sample_test.py
│   └── unit
│       └── sample_test.py
└── unit-requirements.txt

Some explanations regarding structure:

  • .dbx - an auxiliary folder where metadata about environments and execution context is stored
  • sample_project_azure_dev_ops - a Python package with your code (the directory name follows your project name)
  • tests - directory with your package tests
  • conf/deployment.json - deployment configuration file. Please read the following section for a full reference.
  • azure-pipelines.yml - Azure DevOps Pipelines workflow definition

Sample project structure (with GitLab)

.
├── .dbx
│   └── project.json
├── .gitignore
├── README.md
├── .gitlab-ci.yml
├── conf
│   ├── deployment.json
│   └── test
│       └── sample.json
├── pytest.ini
├── sample_project_gitlab
│   ├── __init__.py
│   ├── common.py
│   └── jobs
│       ├── __init__.py
│       └── sample
│           ├── __init__.py
│           └── entrypoint.py
├── setup.py
├── tests
│   ├── integration
│   │   └── sample_test.py
│   └── unit
│       └── sample_test.py
└── unit-requirements.txt

Some explanations regarding structure:

  • .dbx - an auxiliary folder where metadata about environments and execution context is stored
  • sample_project_gitlab - a Python package with your code (the directory name follows your project name)
  • tests - directory with your package tests
  • conf/deployment.json - deployment configuration file. Please read the following section for a full reference.
  • .gitlab-ci.yml - GitLab CI/CD workflow definition

Note on dbx

NOTE:
dbx is a CLI tool for advanced Databricks jobs management. It can be used separately from cicd-templates; if you would like to preserve your own project structure, please refer to the dbx documentation on how to use it with a customized project structure.

Quickstart

NOTE:
As a prerequisite, you need to install databricks-cli and configure a profile. These instructions are based on Databricks Runtime 7.3 LTS ML. Even if you don't need ML libraries, we still recommend the ML runtime because of its %pip magic support.
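
For example, one way to set up the prerequisite locally, assuming the databricks-cli package from PyPI (the profile name is a placeholder):

pip install databricks-cli
databricks configure --token --profile <your-profile-name>   # prompts for the workspace host and a personal access token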

Local steps

Perform the following actions in your development environment:

  • Create new conda environment and activate it:
conda create -n <your-environment-name> python=3.7.5
conda activate <your-environment-name>
  • If you would like to run local unit tests, you'll need a JDK. If you don't have one, it can be installed via:
conda install -c anaconda "openjdk=8.0.152"
  • Install cookiecutter and path:
pip install cookiecutter path
  • Create new project using cookiecutter template:
cookiecutter https://github.com/databrickslabs/cicd-templates
  • Install development dependencies:
pip install -r unit-requirements.txt
  • Install generated package in development mode:
pip install -e .
  • In the generated directory you'll have a sample job with testing and launch configurations around it.
  • Launch and debug your code on an interactive cluster via the following command. The job name can be found in conf/deployment.json:
dbx execute --cluster-name=<my-cluster> --job=<job-name>
  • Make your first deployment from the local machine:
dbx deploy
  • Launch your first pipeline as a new separate job, and trace the job status. The job name can be found in conf/deployment.json:
dbx launch --job <your-job-name> --trace
  • For in-depth local development and unit testing guidance, please refer to the generated README.md in the root of the project.

Setting up CI/CD pipeline on GitHub Actions

  • Create a new repository on GitHub
  • Configure DATABRICKS_HOST and DATABRICKS_TOKEN secrets for your project in GitHub UI
  • Add a remote origin to the local repo
  • Push the code (see the sample commands after this list)
  • Open the GitHub Actions for your project to verify the state of the deployment pipeline
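
The last two steps above are plain git operations; for example (repository URL and branch name are placeholders):

git remote add origin https://github.com/<your-org>/<your-repo>.git
git push -u origin <your-branch>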

Setting up CI/CD pipeline on Azure DevOps

  • Create a new repository on GitHub
  • Connect the repository to Azure DevOps
  • Configure DATABRICKS_HOST and DATABRICKS_TOKEN secrets for your project in Azure DevOps. Note that secret variables must be mapped to env as mentioned here, using the env: syntax. For example:
variables:
- group: Databricks-environment
stages:
...
...
    - script: |
        dbx deploy
      env:
        DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)
  • Add a remote origin to the local repo
  • Push the code
  • Open the Azure DevOps UI to check the deployment status

Setting up CI/CD pipeline on GitLab

  • Create a new repository on GitLab
  • Configure DATABRICKS_HOST and DATABRICKS_TOKEN secrets (CI/CD variables) for your project in the GitLab UI (a sketch of how the pipeline consumes them follows this list)
  • Add a remote origin to the local repo
  • Push the code
  • Open the GitLab CI/CD UI to check the deployment status
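
As an illustration only - the generated .gitlab-ci.yml may differ - a minimal deploy job that consumes those CI/CD variables could look like this (the image tag and job layout are assumptions):

deploy:
  image: python:3.7
  script:
    # DATABRICKS_HOST and DATABRICKS_TOKEN are injected as environment variables
    # from the GitLab CI/CD variables configured above
    - pip install -r unit-requirements.txt
    - pip install -e .
    - dbx deploy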

Deployment file structure

A sample deployment file can be found in the generated project.

The general file structure looks like this:

{
    "<environment-name>": {
        "jobs": [
            {
                "name": "sample_project-sample",
                "existing_cluster_id": "some-cluster-id", 
                "libraries": [],
                "max_retries": 0,
                "spark_python_task": {
                    "python_file": "sample_project/jobs/sample/entrypoint.py",
                    "parameters": [
                        "--conf-file",
                        "conf/test/sample.json"
                    ]
                }
            }
        ]
    }
}

For each environment you can describe any number of jobs. Job descriptions should follow the Databricks Jobs API.

However, dbx deploy adds some advanced behaviour on top of the plain API.

When you run dbx deploy with a given deployment file (by default it takes the deployment file from conf/deployment.json), the following actions will be performed:

  • Find the deployment configuration in --deployment-file (default: conf/deployment.json)
  • Build a .whl package in the given project directory (can be disabled via the --no-rebuild option)
  • Add this .whl package to the job definition
  • Add all requirements from --requirements-file (default: requirements.txt). This step is skipped if the requirements file does not exist.
  • Create a new job, or adjust the existing job if a job with the given name already exists. The job is found by its name.

An important point about referencing: you can also reference arbitrary local files. This is very handy for the python_file section. In the example above, the entrypoint file and the job configuration will be added to the job definition and uploaded to DBFS automatically; no explicit file upload is needed.

Different deployment types

The Databricks Jobs API provides two methods for launching a particular workload:

  • Run Submit API
  • Run Now API

The main logical difference between these methods is that the Run Submit API allows you to submit a workload directly without creating a job. Therefore, we have two deployment types - one for the Run Submit API, and one for the Run Now API.

Deployment for Run Submit API

To deploy only the files, without overriding the job definitions, do the following:

dbx deploy --files-only

To launch the file-based deployment:

dbx launch --as-run-submit --trace

This type of deployment is handy for working in different branches, as it won't affect the job definition.

Deployment for Run Now API

To deploy files and update the job definitions:

dbx deploy

To launch the deployment:

dbx launch --job=<job-name>

This type of deployment is mainly intended for automated use during a new release. dbx deploy will change the job definition (unless the --files-only option is provided).

Troubleshooting

Q: When running dbx deploy I'm getting the exception json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) with the following stack trace:

...
  File ".../lib/python3.7/site-packages/dbx/utils/common.py", line 215, in prepare_environment
    experiment = mlflow.get_experiment_by_name(environment_data["workspace_dir"])
...

json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

What could be causing it and what is the potential fix?

A:
We've seen this exception when the host=https://{domain}/?o={orgid} format is used in the profile for Azure. It is valid for the databricks CLI, but not for the API. If that's the cause, the problem should be gone once the ?o={orgid} suffix is removed.
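
In other words, in the ~/.databrickscfg profile (standard databricks-cli INI format), change the host entry from the first form to the second:

# problematic: accepted by the databricks CLI, but breaks dbx / API calls
host = https://{domain}/?o={orgid}

# working: the ?o={orgid} suffix is removed
host = https://{domain}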

FAQ

Q: I'm using poetry for package management. Is it possible to use poetry together with this template?

A:
Yes, it's also possible, but library management during cluster execution should be performed via the libraries section of the job description. You also might need to disable the automatic rebuild for dbx deploy and dbx execute via the --no-rebuild option. Finally, the built package should be in wheel format and located in the dist/ directory.
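
A hedged sketch of such a flow, assuming a standard poetry project that builds its wheel into dist/:

poetry build --format wheel   # produces dist/<your-package>-<version>-py3-none-any.whl
dbx deploy --no-rebuild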

Q: How can I add my Databricks Notebook to the deployment.json, so I can create a job out of it?

A:
Please follow this documentation section and add a notebook task definition into the deployment file.
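
As an illustration only (the notebook path and cluster id are placeholders), such a job entry in the deployment file could use the notebook_task field from the Jobs API:

{
    "name": "sample_project-notebook-job",
    "existing_cluster_id": "some-cluster-id",
    "notebook_task": {
        "notebook_path": "/Users/<your-user>/<your-notebook>"
    }
}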

Q: Is it possible to use dbx for non-Python based projects, for example Scala-based projects?

A:
Yes, it's possible, but the interactive mode (dbx execute) is not yet supported. However, you can take the dbx wheel to your Scala-based project and reference your jar files in the deployment file, so that the dbx deploy and dbx launch commands are available to you.
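
As an illustration only (the class name, jar path and cluster id are placeholders), a jar-based job entry following the Jobs API could look like this:

{
    "name": "sample_project-scala-job",
    "existing_cluster_id": "some-cluster-id",
    "libraries": [
        {"jar": "path/to/my-assembly-0.1.0.jar"}
    ],
    "spark_jar_task": {
        "main_class_name": "com.example.SampleJob"
    }
}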

Q: I have a lot of interdependent jobs, and using solely JSON seems like a giant code duplication. What could solve this problem?

A:
You can implement any configuration logic and simply write the output into a custom deployment.json file, then pass it via the --deployment-file option. As an example, you can generate your configuration using a Python script, or Jsonnet.
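
For instance, a minimal Python sketch that renders such a file (environment names, job layout and output path are illustrative):

import json

ENVIRONMENTS = ["test", "prod"]  # illustrative environment names


def job(name, env):
    # settings shared by all jobs; extend with clusters, retries, etc. as needed
    return {
        "name": f"sample_project-{name}-{env}",
        "max_retries": 0,
        "libraries": [],
        "spark_python_task": {
            "python_file": f"sample_project/jobs/{name}/entrypoint.py",
            "parameters": ["--conf-file", f"conf/{env}/{name}.json"],
        },
    }


deployment = {env: {"jobs": [job("sample", env)]} for env in ENVIRONMENTS}

with open("conf/deployment-generated.json", "w") as f:
    json.dump(deployment, f, indent=4)

The generated file can then be passed explicitly, e.g. dbx deploy --deployment-file conf/deployment-generated.json.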

Q: How can I secure the project environment?

A:
From the state serialization perspective, your code and deployments are stored in two separate storages:

  • workspace directory - this directory is stored in your workspace, described per environment and defined in .dbx/project.json in the workspace_dir field. To control access to this directory, please use Workspace ACLs.
  • artifact location - this location is stored in DBFS, described per environment and defined in .dbx/project.json in the artifact_location field. To control access to this location, please use credentials passthrough (docs for ADLS and for S3). A sketch of the corresponding .dbx/project.json fields follows this list.
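
A sketch of how these fields typically appear in .dbx/project.json (exact layout may differ between dbx versions; the paths and profile name are placeholders):

{
    "environments": {
        "default": {
            "profile": "<your-databricks-cli-profile>",
            "workspace_dir": "/Shared/dbx/projects/sample_project",
            "artifact_location": "dbfs:/dbx/sample_project"
        }
    }
}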

Q: I would like to use self-hosted (private) pypi repository. How can I configure my deployment and CI/CD pipeline?

A:
To set up this scenario, there are some settings to be applied:

  • The Databricks driver should have network access to your PyPI repository
  • An additional step to deploy your package to the PyPI repo should be configured in the CI/CD pipeline
  • Package rebuild and generation should be disabled via the --no-rebuild --no-package arguments for dbx execute
  • The package reference should be configured in the job description

Here is a sample dbx deploy command:

dbx deploy --no-rebuild --no-package

A sample libraries section configuration:

{
    "pypi": {"package": "my-package-name==1.0.0", "repo": "my-repo.com"}
}

Q: What is the purpose of init_adapter method in SampleJob?

A: This method should be primarily used to adapt the configuration for dbx execute-based runs. By using this method, you can provide an initial configuration in case the --conf-file option is not provided.
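
A hedged sketch of what this can look like, assuming the generated Job base class exposes the parsed configuration as self.conf (module path and default values are purely illustrative):

from sample_project.common import Job  # base class generated by the template (module path assumed)


class SampleJob(Job):
    def init_adapter(self):
        # fallback configuration for interactive dbx execute runs,
        # used when --conf-file is not passed on the command line
        if not self.conf:
            self.conf = {"output_path": "dbfs:/tmp/sample_project/output"}  # illustrative default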

Q: I don't like the idea of storing the host and token variables in the ~/.databrickscfg file inside the CI pipeline. How can I make this setup more secure?

A:
dbx now supports environment variables, provided via DATABRICKS_HOST and DATABRICKS_TOKEN. If these variables are defined in the environment, no ~/.databrickscfg file is needed.
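
For example, in a CI step (the values are placeholders):

export DATABRICKS_HOST="https://<your-workspace-host>"
export DATABRICKS_TOKEN="<your-personal-access-token>"
dbx deploy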

Legal Information

This software is provided as-is and is not officially supported by Databricks through customer technical support channels. Support, questions, and feature requests can be communicated through the Issues page of this repo. Please see the legal agreement and understand that issues with the use of this code will not be answered or investigated by Databricks Support.

Feedback

Issues with the template? Found a bug? Have a great idea for an addition? Feel free to file an issue.

Contributing

Have a great idea that you want to add? Fork the repo and submit a PR!
