  • Stars: 415
  • Rank: 104,301 (Top 3%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 2 years ago
  • Updated: 4 months ago

Databricks MLOps Stack

NOTE: This feature is in private preview. The interface/APIs may change and no formal support is available during the preview. However, you can still create new production-grade ML projects using the stack. If interested in trying it out, please fill out this form, and you’ll be contacted by a Databricks representative.

This repo provides a customizable stack for starting new ML projects on Databricks that follow production best-practices out of the box.

The default stack in this repo includes three modular components:

Component | Description | Why it's useful
ML Code | Example ML project structure, with unit-tested Python modules and notebooks | Quickly iterate on ML problems, without worrying about refactoring your code into tested modules for productionization later on.
ML Resource Configs as Code | ML pipeline resources (training and batch inference jobs, etc.) defined through databricks CLI bundles | Govern, audit, and deploy changes to your ML resources (e.g. "use a larger instance type for automated model retraining") through pull requests, rather than ad hoc changes made via the UI.
CI/CD | GitHub Actions or Azure DevOps workflows to test and deploy ML code and resources | Ship ML code faster and with confidence: ensure all production changes are performed through automation and that only tested code is deployed to prod.
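
Changes to these resource configs are rolled out through the CLI rather than the UI. As a rough sketch (assuming a bundles-enabled databricks CLI authenticated against your workspace, run from the directory containing bundle.yml):

# Sketch only: check that the bundle config is well-formed, then deploy it.
databricks bundle validate
databricks bundle deploy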

Your organization can use the default stack as is or customize it as needed, e.g. to add/remove components or adapt individual components to fit your organization's best practices. See the stack customization guide for more details.

Using Databricks MLOps Stacks, data scientists can quickly get started iterating on ML code for new projects, while ops engineers set up CI/CD and ML service state management, with an easy transition to production. You can also use MLOps Stacks as a building block in automation for creating new data science projects with production-grade CI/CD pre-configured.

[MLOps Stacks diagram]

See the FAQ for questions on common use cases.

ML pipeline structure and dev loop

See this page for a detailed description and diagrams of the ML pipeline structure defined in the default stack.

Using this stack

Prerequisites

pip install 'cookiecutter>=2.1.0'

Starting a new project

To create a new project, run:

cookiecutter https://github.com/databricks/mlops-stack

This will prompt for parameters for project initialization. Some of these parameters are required to get started:

  • project_name: name of the current project
  • root_dir__update_if_you_intend_to_use_monorepo: name of the root directory. Specify this if the project will live in a monorepo; otherwise, leave it blank to use the project name as the root directory name.
  • cloud: Cloud provider you use with Databricks (AWS, Azure, or GCP)
  • cicd_platform: CI/CD platform of choice (GitHub Actions, GitHub Actions for GitHub Enterprise Servers, or Azure DevOps)

Other parameters must be correctly specified for CI/CD to work, but can be left at their default values until you're ready to productionize a model. We recommend specifying any parameters you already know upfront (e.g. if you know databricks_staging_workspace_host, it's better to specify it upfront); a non-interactive example follows the list below:

  • databricks_staging_workspace_host: URL of staging Databricks workspace, used to run CI tests on PRs and preview config changes before they're deployed to production. We encourage granting data scientists working on the current ML project non-admin (read) access to this workspace, to enable them to view and debug CI test results.
  • databricks_prod_workspace_host: URL of production Databricks workspace. We encourage granting data scientists working on the current ML project non-admin (read) access to this workspace, to enable them to view production job status and see job logs to debug failures.
  • default_branch: Name of the default branch, where the prod and staging ML resources are deployed from and the latest ML code is staged.
  • release_branch: Name of the release branch. The production jobs (model training, batch inference) defined in this repo pull ML code from this branch.
  • read_user_group: User group name to give READ permissions to for project resources (ML jobs, integration test job runs, and machine learning resources). A group with this name must exist in both the staging and prod workspaces. Defaults to "users", which grants read permission to all users in the staging/prod workspaces. You can specify a custom group name e.g. to restrict read permissions to members of the team working on the current ML project.
  • include_feature_store: If selected, will provide Databricks Feature Store stack components including: project structure and sample feature Python modules, feature engineering notebooks, ML resource configs to provision and manage Feature Store jobs, and automated integration tests covering feature engineering and training.
  • include_mlflow_recipes: If selected, will provide MLflow Recipes stack components, dividing the training pipeline into configurable steps and profiles.
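
Because the stack is a standard cookiecutter template, parameters can also be supplied non-interactively on the command line. Below is a minimal sketch; all parameter values shown are placeholders, and the exact accepted spellings for choice parameters such as cloud and cicd_platform may differ from what's shown here:

# Hypothetical non-interactive run; replace the placeholder values with your own.
cookiecutter https://github.com/databricks/mlops-stack --no-input \
  project_name="my_mlops_project" \
  cloud="aws" \
  cicd_platform="GitHub Actions" \
  databricks_staging_workspace_host="https://my-staging-workspace.cloud.databricks.com" \
  databricks_prod_workspace_host="https://my-prod-workspace.cloud.databricks.com"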

See the generated README.md for next steps!

FAQ

Do I need separate dev/staging/prod workspaces to use this stack?

We recommend using separate dev/staging/prod Databricks workspaces for stronger isolation between environments. For example, Databricks REST API rate limits are applied per-workspace, so if using Databricks Model Serving, using separate workspaces can help prevent high load in staging from DoS-ing your production model serving endpoints.

However, you can run the stack against just a single workspace, against a dev and staging/prod workspace, etc. Just supply the same workspace URL for databricks_staging_workspace_host and databricks_prod_workspace_host. If you go this route, we recommend using different service principals to manage staging vs. prod resources, to ensure that CI workloads run against staging cannot interfere with production resources.

I have an existing ML project. Can I productionize it using this stack?

Yes. Currently, you can instantiate a new project from the stack and copy relevant components into your existing project to productionize it. The stack is modularized, so you can e.g. copy just the GitHub Actions workflows under .github or ML resource configs under {{cookiecutter.root_dir__update_if_you_intend_to_use_monorepo}}/{{cookiecutter.project_name_alphanumeric_underscore}}/databricks-resources and {{cookiecutter.root_dir__update_if_you_intend_to_use_monorepo}}/{{cookiecutter.project_name_alphanumeric_underscore}}/bundle.yml into your existing project.

Can I adopt individual components of the stack?

For this use case, we recommend instantiating the full stack via cookiecutter and copying the relevant stack subdirectories. For example, all ML resource configs are defined under {{cookiecutter.root_dir__update_if_you_intend_to_use_monorepo}}/{{cookiecutter.project_name_alphanumeric_underscore}}/databricks-resources and {{cookiecutter.root_dir__update_if_you_intend_to_use_monorepo}}/{{cookiecutter.project_name_alphanumeric_underscore}}/bundle.yml, while CI/CD is defined e.g. under .github if using GitHub Actions, or under .azure if using Azure DevOps.
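
As a concrete sketch of this flow (assuming GitHub Actions and the default directory layout, with my_project standing in for both the generated root directory and project name, and my-existing-repo for your existing project; all names are placeholders):

# Generate the stack into a scratch directory, then copy only the pieces you need.
cookiecutter https://github.com/databricks/mlops-stack --output-dir /tmp/mlops-stack-out
cp -r /tmp/mlops-stack-out/my_project/.github my-existing-repo/
cp -r /tmp/mlops-stack-out/my_project/my_project/databricks-resources my-existing-repo/
cp /tmp/mlops-stack-out/my_project/my_project/bundle.yml my-existing-repo/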

Can I customize this stack?

Yes. We provide the default stack in this repo as a production-friendly starting point for MLOps. However, in many cases you may need to customize the stack to match your organization's best practices. See the stack customization guide for details on how to do this.

Does the MLOps stack cover data (ETL) pipelines?

Since MLOps Stacks is based on databricks CLI bundles, it's not limited to ML workflows and assets; it works for assets across the Databricks Lakehouse. For instance, while the existing ML code samples contain feature engineering, training, model validation, deployment, and batch inference workflows, you can define Delta Live Tables pipelines as well.

How can I provide feedback?

Please provide feedback (bug reports, feature requests, etc) via GitHub issues.

Contributing

We welcome community contributions. For substantial changes, we ask that you first file a GitHub issue to facilitate discussion, before opening a pull request.

This stack is implemented as a cookiecutter template that generates new projects given user-supplied parameters. Parametrized project code can be found under the {{cookiecutter.root_dir__update_if_you_intend_to_use_monorepo}} directory.

Installing development requirements

To run tests, install actionlint, the databricks CLI, npm, and act, then install the Python dependencies listed in dev-requirements.txt:

pip install -r dev-requirements.txt
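
If you're developing on macOS with Homebrew, a minimal sketch for the prerequisite tools follows (npm ships with node; install the databricks CLI per its own documentation, since the method depends on the CLI version you use):

# Assumes macOS + Homebrew; installs actionlint, act, and node (which provides npm).
brew install actionlint act node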

Running the tests

NOTE: This section is for open-source developers contributing to the default stack in this repo. If you are working on an ML project using the stack (e.g. if you ran cookiecutter to start a new project), see the README.md within your generated project directory for detailed instructions on how to make and test changes.

Run unit tests:

pytest tests

Run all tests (unit and slower integration tests):

pytest tests --large

Run integration tests only:

pytest tests --large-only

Previewing stack changes

When making changes to the stack, it can be convenient to see how those changes affect an actual new ML project created from the stack. To do this, you can create an example project from your local checkout of the stack, and inspect its contents/run tests within the project.

We provide example project configs for Azure (using both GitHub and Azure DevOps) and AWS (using GitHub) under tests/example-project-configs. To create an example Azure project, using Azure DevOps as the CI/CD platform, run the following from the desired parent directory of the example project:

# Note: update MLOPS_STACK_PATH to the path to your local checkout of the stack
MLOPS_STACK_PATH=~/mlops-stack
cookiecutter "$MLOPS_STACK_PATH" --config-file "$MLOPS_STACK_PATH/tests/example-project-configs/azure/azure-devops.yaml" --no-input --overwrite-if-exists

To create an example AWS project, using GitHub Actions for CI/CD, run:

# Note: update MLOPS_STACK_PATH to the path to your local checkout of the stack
MLOPS_STACK_PATH=~/mlops-stack
cookiecutter "$MLOPS_STACK_PATH" --config-file "$MLOPS_STACK_PATH/tests/example-project-configs/aws/aws-github.yaml" --no-input --overwrite-if-exists

More Repositories

1. learning-spark: Example code from Learning Spark book (Java, 3,864 stars)
2. koalas: pandas API on Apache Spark (Python, 3,330 stars)
3. Spark-The-Definitive-Guide: Spark: The Definitive Guide's code repository (Scala, 2,678 stars)
4. scala-style-guide: Databricks Scala Coding Style Guide (2,673 stars)
5. spark-deep-learning: Deep Learning Pipelines for Apache Spark (Python, 1,993 stars)
6. click: The "Command Line Interactive Controller for Kubernetes" (Rust, 1,416 stars)
7. LearningSparkV2: GitHub repo for Learning Spark: Lightning-Fast Data Analytics, 2nd Edition (Scala, 1,158 stars)
8. megablocks (Python, 1,147 stars)
9. spark-sklearn: (Deprecated) scikit-learn integration package for Apache Spark (Python, 1,080 stars)
10. spark-csv: CSV data source for Apache Spark 1.x (Scala, 1,051 stars)
11. tensorframes: (Deprecated) TensorFlow wrapper for DataFrames on Apache Spark (Scala, 749 stars)
12. devrel: Notebooks and presentations used for Databricks Tech Talks (HTML, 687 stars)
13. reference-apps: Spark reference applications (Scala, 648 stars)
14. spark-redshift: Redshift data source for Apache Spark (Scala, 598 stars)
15. spark-sql-perf (Scala, 543 stars)
16. spark-avro: Avro data source for Apache Spark (Scala, 538 stars)
17. spark-xml: XML data source for Spark SQL and DataFrames (Scala, 501 stars)
18. spark-corenlp: Stanford CoreNLP wrapper for Apache Spark (Scala, 424 stars)
19. spark-training: Apache Spark training material (Scala, 396 stars)
20. databricks-cli: (Legacy) command line interface for Databricks (Python, 384 stars)
21. spark-perf: Performance tests for Apache Spark (Scala, 372 stars)
22. delta-live-tables-notebooks (Python, 334 stars)
23. terraform-provider-databricks: Databricks Terraform Provider (Go, 333 stars)
24. spark-knowledgebase: Spark Knowledge Base (328 stars)
25. databricks-ml-examples (Python, 284 stars)
26. sjsonnet (Scala, 266 stars)
27. jsonnet-style-guide: Databricks Jsonnet Coding Style Guide (205 stars)
28. dbt-databricks: A dbt adapter for Databricks (Python, 199 stars)
29. databricks-sdk-py: Databricks SDK for Python (Beta) (Python, 185 stars)
30. containers: Sample base images for Databricks Container Services (Dockerfile, 163 stars)
31. databricks-sql-python: Databricks SQL Connector for Python (Python, 158 stars)
32. sbt-spark-package: sbt plugin for Spark packages (Scala, 150 stars)
33. notebook-best-practices: An example showing how to apply software engineering best practices to Databricks notebooks (Python, 116 stars)
34. databricks-vscode: VS Code extension for Databricks (TypeScript, 114 stars)
35. benchmarks: A place in which we publish scripts for reproducible benchmarks (Python, 106 stars)
36. terraform-databricks-examples: Examples of using Terraform to deploy Databricks resources (HCL, 103 stars)
37. spark-tfocs: A Spark port of TFOCS: Templates for First-Order Conic Solvers (cvxr.com/tfocs) (Scala, 88 stars)
38. intellij-jsonnet: IntelliJ Jsonnet plugin (Java, 82 stars)
39. sbt-databricks: An sbt plugin for deploying code to Databricks Cloud (Scala, 71 stars)
40. terraform-databricks-lakehouse-blueprints: Set of Terraform automation templates and quickstart demos to jumpstart the design of a Lakehouse on Databricks, with composable modules that reflect cross-industry best practices for platform security and governance (Python, 71 stars)
41. spark-integration-tests: Integration tests for Spark (Scala, 68 stars)
42. genai-cookbook (Python, 63 stars)
43. spark-pr-dashboard: Dashboard to aid in Spark pull request reviews (JavaScript, 54 stars)
44. run-notebook (TypeScript, 47 stars)
45. ide-best-practices: Best practices for working with Databricks from an IDE (Python, 47 stars)
46. unity-catalog-setup: Notebooks, Terraform, and tools to enable setting up Unity Catalog (45 stars)
47. simr: Spark In MapReduce (SIMR), for launching Spark applications on existing Hadoop MapReduce infrastructure (Java, 44 stars)
48. devbox (Scala, 38 stars)
49. databricks-sql-go: Golang database/sql driver for Databricks SQL (Go, 35 stars)
50. diviner: Grouped time series forecasting engine (Python, 33 stars)
51. cli: Databricks CLI (Go, 32 stars)
52. tmm (Python, 30 stars)
53. security-bucket-brigade (JavaScript, 30 stars)
54. databricks-sdk-go: Databricks SDK for Go (Go, 29 stars)
55. pig-on-spark: Proof-of-concept implementation of Pig-on-Spark integrated at the logical node level (Scala, 28 stars)
56. databricks-sql-cli: CLI for querying Databricks SQL (Python, 27 stars)
57. automl (Python, 26 stars)
58. databricks-sql-nodejs: Databricks SQL Connector for Node.js (TypeScript, 24 stars)
59. tpch-dbgen: Patched version of dbgen (C, 22 stars)
60. als-benchmark-scripts: Scripts to benchmark distributed Alternating Least Squares (ALS) (Scala, 22 stars)
61. spark-package-cmd-tool: A command line tool for Spark packages (Python, 18 stars)
62. congruity: A compatibility layer that makes it easier to adopt Spark Connect; simply import it in your application and it monkey-patches the existing API to provide the legacy functionality (Python, 16 stars)
63. python-interview: Databricks Python interview setup instructions (15 stars)
64. xgb-regressor: MLflow XGBoost Regressor (Python, 15 stars)
65. databricks-accelerators: Accelerate the use of Databricks for customers [public repo] (Python, 15 stars)
66. tableau-connector (Scala, 12 stars)
67. files_in_repos (Python, 12 stars)
68. upload-dbfs-temp (TypeScript, 12 stars)
69. spark-sklearn-docs (HTML, 11 stars)
70. sqltools-databricks-driver: SQLTools driver for Databricks SQL (TypeScript, 11 stars)
71. genomics-pipelines: Secondary analysis pipelines parallelized with Apache Spark (Scala, 10 stars)
72. workflows-examples (10 stars)
73. databricks-sdk-java: Databricks SDK for Java (Java, 10 stars)
74. dais-cow-bff: Code for the "Path to Production" DAIS 2024 and 2023 talks (Jupyter Notebook, 8 stars)
75. xgboost-linux64: Databricks private xgboost Linux64 fork (C++, 8 stars)
76. mlflow-example-sklearn-elasticnet-wine (Jupyter Notebook, 7 stars)
77. databricks-ttyd (C, 6 stars)
78. setup-cli: Sets up the Databricks CLI in your GitHub Actions workflow (Shell, 4 stars)
79. terraform-databricks-mlops-aws-project: Creates and configures service principals with appropriate permissions and entitlements to run CI/CD for a project, and creates a workspace directory as a container for project-specific resources for the Databricks AWS staging and prod workspaces (HCL, 4 stars)
80. jenkins-job-builder: Fork of https://docs.openstack.org/infra/jenkins-job-builder/ to include unmerged patches (Python, 4 stars)
81. terraform-databricks-mlops-azure-project-with-sp-creation: Creates and configures service principals with appropriate permissions and entitlements to run CI/CD for a project, creates a workspace directory as a container for project-specific resources for the Azure Databricks staging and prod workspaces, and creates the relevant Azure Active Directory (AAD) applications for the service principals (HCL, 4 stars)
82. terraform-databricks-sra: The Security Reference Architecture (SRA) implements, as Terraform templates, typical security features deployed by most high-security organizations, and enforces controls for the largest risks that customers ask about most often (HCL, 4 stars)
83. databricks-empty-ide-project: Empty IDE project used by the VS Code extension for Databricks (3 stars)
84. databricks-repos-proxy (Python, 2 stars)
85. databricks-asset-bundles-dais2023 (Python, 2 stars)
86. pex: Fork of pantsbuild/pex with a few Databricks-specific changes (Python, 2 stars)
87. SnpEff: Databricks SnpEff fork (Java, 2 stars)
88. notebook_gallery (Jupyter Notebook, 2 stars)
89. terraform-databricks-mlops-aws-infrastructure: Sets up multi-workspace model registry between a Databricks AWS development (dev), staging, and production (prod) workspace, allowing READ access from dev/staging workspaces to the staging and prod model registries (HCL, 2 stars)
90. expectations (Python, 1 star)
91. homebrew-tap: Homebrew tap for the Databricks CLI (Ruby, 1 star)
92. terraform-databricks-mlops-azure-infrastructure-with-sp-creation: Sets up multi-workspace model registry between an Azure Databricks development (dev), staging, and production (prod) workspace, allowing READ access from dev/staging workspaces to the staging and prod model registries, and creates the relevant Azure Active Directory (AAD) applications for the service principals (HCL, 1 star)
93. mfg_dlt_workshop: DLT Manufacturing Workshop (Python, 1 star)
94. databricks-dbutils-scala: The Scala SDK for Databricks (Scala, 1 star)
95. kdd24-forecasting-anomaly-detection (Python, 1 star)
96. terraform-databricks-mlops-azure-project-with-sp-linking: Creates and configures service principals with appropriate permissions and entitlements to run CI/CD for a project, creates a workspace directory as a container for project-specific resources for the Azure Databricks staging and prod workspaces, and links pre-existing Azure Active Directory (AAD) applications to the service principals (HCL, 1 star)
97. terraform-databricks-mlops-azure-infrastructure-with-sp-linking: Sets up multi-workspace model registry between an Azure Databricks development (dev), staging, and production (prod) workspace, allowing READ access from dev/staging workspaces to the staging and prod model registries, and links pre-existing Azure Active Directory (AAD) applications to the service principals (HCL, 1 star)