Databricks Labs AutoML Toolkit

Release Notes | Python API Docs | Python Artifact | Developer Docs | Python Docs | Analysis Tools Docs | Demo | Release Artifacts | Contributors

This Databricks Labs project is a non-officially-supported end-to-end supervised learning solution for automating:

  • Feature clean-up
    • Advanced NA fill, covariance calculations, collinearity determination, outlier filtering, and data casting
  • Feature Importance calculation suite
    • RandomForest or XGBoost determinations
  • Feature Interaction with Information Gain selection
  • Feature vectorization
  • Advanced train/test split techniques (including Distributed SMOTE (KSample))
  • Model selection and training
  • Hyper parameter optimization and selection
    • Hyperspace, Genetic, and MBO-based selection
  • Batch Prediction through serialized SparkML Pipelines
  • Logging of model results and training runs (using MLflow)
  • Model interpretability (including distributed Shapley Values)

This package utilizes Apache Spark ML and currently supports the following model family types:

  • Decision Trees (Regressor and Classifier)
  • Gradient Boosted Trees (Regressor and Classifier)
  • Random Forest (Regressor and Classifier)
  • Linear Regression
  • Logistic Regression
  • Multi-Layer Perceptron Classifier
  • Support Vector Machines
  • XGBoost (Regressor and Classifier)

NOTE: With the upgrade to Spark 3 (Scala 2.12) LightGBM is no longer supported but will be added in a future release.

Documentation

Scala API documentation can be found here

Python API documentation can be found here

Analytics Package API documentation can be found here

Installing - Recommended!

Databricks Labs AutoML can be pulled from Maven Central with the following coordinates. For example, to install AutoML 0.8.1:

<dependency>
  <groupId>com.databricks.labs</groupId>
  <artifactId>automl-toolkit_2.12</artifactId>
  <version>0.8.1</version>
</dependency>
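
If you manage dependencies with SBT instead, an equivalent coordinate (a sketch, assuming the same Maven Central artifact and version as above) can be added to build.sbt:

// build.sbt (sketch): same artifact from Maven Central as the Maven coordinates above
libraryDependencies += "com.databricks.labs" % "automl-toolkit_2.12" % "0.8.1"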

Building

Databricks Labs AutoML can be built with either SBT or Maven.

This package requires Java 1.8.x and Scala 2.12.x to be installed on your system prior to building.

After cloning this repo onto your local system, navigate to the root directory and execute either:

Maven Build
mvn clean install -DskipTests
SBT Build
sbt package

If a StackOverflowError occurs during the build, increase the stack size of your machine's JVM. Example:

#For Maven
export MAVEN_OPTS=-Xss2m
#For SBT
export SBT_OPTS="-Xss2M"

The build commands above skip unit test execution (running the unit tests in local mode against this package is not recommended, as the test suite is asynchronous and extremely CPU intensive for this code base).

Setup

Once the artifact has been built, upload it to the Databricks workspace through either the DBFS API or the GUI. Once loaded into the workspace, use either the Libraries API or the GUI to attach the .jar to a cluster.

NOTE: It is not recommended to attach this library to all clusters on the account.

Use of an ML Runtime cluster configuration is highly advised to ensure that custom management of dependent libraries and configurations is provided 'out of the box'.

Attach the following libraries to the cluster:

  • The automl toolkit jar created above. (automatedml_2.12-((version)).jar)
  • If using the PySpark API for the toolkit, the .whl file for the PySpark API.

IMPORTANT NOTE: as of release 0.7.1, the MLflow libraries in PyPI and Maven are NO LONGER NEEDED. Attaching them to your cluster WILL prevent the run from logging and will throw an exception. DO NOT ATTACH EITHER OF THEM.

Getting Started

This package provides a number of different levels of API interaction, from the highest-level "default only" FamilyRunner to low-level APIs that allow highly customizable workflows to be created for automated ML tuning and inference.

Since v0.6.0 we have included an API to work with the pipeline semantics around feature engineering steps and full predict pipelines. For the purposes of a quick-start intro, the example below uses the highest-level API access point.

import com.databricks.labs.automl.executor.config.ConfigurationGenerator
import com.databricks.labs.automl.executor.FamilyRunner
import org.apache.spark.ml.PipelineModel

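// Load the training data and define configuration overrides for the run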
val data = spark.table("ben_demo.adult_data")
val overrides = Map(
  "labelCol" -> "label",
  "mlFlowLoggingFlag" -> false,
  "scalingFlag" -> true,
  "oneHotEncodeFlag" -> true,
  "pipelineDebugFlag" -> true
)
val randomForestConfig = ConfigurationGenerator
        .generateConfigFromMap("RandomForest", "classifier", overrides)

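// Execute data preparation, feature vectorization, tuning, and model selection for each configured family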
val runner = FamilyRunner(data, Array(randomForestConfig)).executeWithPipeline()

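// Score the input data with the best pipeline model found for the RandomForest family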
runner.bestPipelineModel("RandomForest").transform(data)

//Serialize it
runner.bestPipelineModel("RandomForest").write.overwrite().save("tmp/predict-pipeline-1")

// Load it for running inference
val pipelineModel = PipelineModel.load("tmp/predict-pipeline-1")
val predictDf = pipelineModel.transform(data)

This example takes the default configuration for all of the application parameters (except those overridden in the overrides Map) and executes data preparation, feature vectorization, and automatic tuning of each specified model type (a single RandomForest classifier here). At the conclusion of each run, the results and model artifacts are logged to the MLflow location specified in the configuration (when MLflow logging is enabled).

For a listing of all available parameter overrides and their functionality, see the Developer Docs
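
To tune several of the supported model families in a single run, pass one configuration per family to the FamilyRunner. The sketch below reuses the ConfigurationGenerator and FamilyRunner calls shown above; the table name and the family identifiers other than "RandomForest" are illustrative assumptions, so check the Developer Docs for the exact strings.

import com.databricks.labs.automl.executor.config.ConfigurationGenerator
import com.databricks.labs.automl.executor.FamilyRunner

val data = spark.table("my_db.training_data") // hypothetical table name

val overrides = Map(
  "labelCol" -> "label",
  "mlFlowLoggingFlag" -> false
)

// One configuration per model family to evaluate (family identifiers other than
// "RandomForest" are assumptions - verify them against the Developer Docs)
val configs = Array("RandomForest", "GBT", "LogisticRegression").map { family =>
  ConfigurationGenerator.generateConfigFromMap(family, "classifier", overrides)
}

// Runs the full pipeline for every configuration and tracks the best model per family
val runner = FamilyRunner(data, configs).executeWithPipeline()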

Inference via MLflow Run ID

It is also possible to use an MLflow Run ID for inference, provided MLflow logging is turned on during training. For usage, see this

For all available pipeline APIs, please see the Developer Docs

Feedback

Issues with the application? Found a bug? Have a great idea for an addition? Feel free to file an issue or contact Ben

Contributing

Have a great idea that you want to add? Fork the repo and submit a PR!

Legal Information

This software is provided as-is and is not officially supported by Databricks through customer technical support channels. Support, questions, and feature requests can be communicated via email -> [email protected] or through the Issues page of this repo. Please see the legal agreement and understand that issues with the use of this code will not be answered or investigated by Databricks Support.

Core Contribution team

  • Lead Developer: Ben Wilson, Practice Leader, Databricks
  • Developer: Daniel Tomes, RSA Practice Leader, Databricks
  • Developer: Jas Bali, Sr. Solutions Consultant, Databricks
  • Developer: Mary Grace Moesta, Customer Success Engineer, Databricks
  • Developer: Nick Senno, Resident Solutions Architect, Databricks
