• Stars: 313
  • Rank: 133,714 (Top 3%)
  • Language: Python
  • License: Other
  • Created: over 5 years ago
  • Updated: 3 months ago


Repository Details

Generate relevant synthetic data quickly for your projects. The Databricks Labs synthetic data generator (aka `dbldatagen`) may be used to generate large simulated/synthetic data sets for tests, POCs, and other uses in Databricks environments, including in Delta Live Tables pipelines.

Databricks Labs Data Generator (dbldatagen)

Documentation | Release Notes | Examples | Tutorial


Project Description

The dbldatagen Databricks Labs project is a Python library for generating synthetic data within the Databricks environment using Spark. The generated data may be used for testing, benchmarking, demos, and many other uses.

It operates by defining a data generation specification in code that controls how the synthetic data is generated. The specification may incorporate the use of existing schemas or create data in an ad-hoc fashion.

It has no dependencies on any libraries that are not already installed in the Databricks runtime, and you can use it from Scala, R or other languages by defining a view over the generated data.
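
For example, a minimal sketch of exposing generated data to SQL (and therefore to Scala or R notebook cells) via a temporary view; the view name synthetic_data and the columns used are illustrative:

import dbldatagen as dg
from pyspark.sql.types import IntegerType

# build a small synthetic DataFrame and register it as a temporary view
df = (dg.DataGenerator(spark, name="view_example", rows=1000, partitions=2)
      .withIdOutput()
      .withColumn("code", IntegerType(), minValue=1, maxValue=5)
      .build())
df.createOrReplaceTempView("synthetic_data")  # illustrative view name

# the view can now be queried from SQL, Scala, or R cells
spark.sql("SELECT code, count(*) AS n FROM synthetic_data GROUP BY code").show()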

Feature Summary

It supports:

  • Generating synthetic data at scale up to billions of rows within minutes using appropriately sized clusters
  • Generating repeatable, predictable data to support multi-table, Change Data Capture, merge, and join scenarios with consistency between primary and foreign keys
  • Generating synthetic data for all of the Spark SQL supported primitive types, as a Spark data frame that may be persisted, saved to external storage, or used in other computations
  • Generating ranges of dates, timestamps, and numeric values
  • Generating discrete values, both numeric and text
  • Generating values at random or based on the values of other fields (either from the hash of the underlying values or from the values themselves)
  • Specifying a distribution for random data generation
  • Generating arrays of values for ML-style feature arrays
  • Applying weights to the occurrence of values
  • Generating values to conform to a schema or independent of an existing schema
  • Use of SQL expressions in synthetic data generation
  • A plugin mechanism to allow the use of third-party libraries such as Faker
  • Use within a Databricks Delta Live Tables pipeline as a synthetic data generation source
  • Generating synthetic data generation code from an existing schema or data (experimental)

Details of these features can be found in the online documentation.
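
As a brief, hedged sketch of a few of these features, the following illustrative specification combines random timestamps within a range, discrete text values, and a column derived from another field; the column names and ranges are invented for illustration, and the begin/end/interval options follow the patterns described in the online documentation:

import dbldatagen as dg
from pyspark.sql.types import IntegerType, StringType

# illustrative spec: timestamp ranges, discrete text values, and a derived column
spec = (dg.DataGenerator(spark, name="feature_sketch", rows=10000, partitions=4)
        .withColumn("device_id", IntegerType(), minValue=1, maxValue=100, random=True)
        .withColumn("device_type", StringType(), values=["sensor", "gateway", "camera"],
                    baseColumn="device_id")            # value derived from the device_id column
        .withColumn("event_ts", "timestamp",
                    begin="2023-01-01 00:00:00", end="2023-12-31 23:59:00",
                    interval="1 minute", random=True)  # random timestamps within a range
        )
df = spec.build()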

Documentation

Please refer to the online documentation for details of use and many examples.

Release notes and details of the latest changes for this specific release can be found in the GitHub repository.

Installation

Use pip install dbldatagen to install the PyPi package.

Within a Databricks notebook, invoke the following in a notebook cell

%pip install dbldatagen

The pip install command can be invoked within a Databricks notebook or a Delta Live Tables pipeline, and it even works on the Databricks Community Edition.

The installation notes in the documentation contain details of installation using alternative mechanisms.

Compatibility

The Databricks Labs Data Generator framework can be used with Pyspark 3.1.2 and Python 3.8 or later. These are compatible with the Databricks runtime 9.1 LTS and later releases.

Older prebuilt releases were tested against Pyspark 3.0.1 (compatible with the Databricks runtime 7.3 LTS or later) and built with Python 3.7.5.

For full library compatibility for a specific Databricks Spark release, see the Databricks release notes on library compatibility.

When using the Databricks Labs Data Generator in "Unity Catalog" enabled environments, the Data Generator requires the use of Single User or No Isolation Shared access modes, as some needed features are not available in Shared mode (for example, the use of third-party libraries). Depending on settings, the Custom access mode may be supported.

See the Databricks documentation on compute access modes for more information.

Using the Data Generator

To use the data generator, install the library using the %pip install method or install the Python wheel directly in your environment.

Once the library has been installed, you can use it to generate a data frame composed of synthetic data.

For example

import dbldatagen as dg
from pyspark.sql.types import IntegerType, FloatType, StringType

column_count = 10
data_rows = 1000 * 1000

# define the data generation specification
df_spec = (dg.DataGenerator(spark, name="test_data_set1", rows=data_rows,
                            partitions=4)
           .withIdOutput()                               # include the generated `id` column in the output
           .withColumn("r", FloatType(),
                       expr="floor(rand() * 350) * (86400 + 3600)",
                       numColumns=column_count)          # multiple numbered columns from one SQL expression
           .withColumn("code1", IntegerType(), minValue=100, maxValue=200)
           .withColumn("code2", IntegerType(), minValue=0, maxValue=10)
           .withColumn("code3", StringType(), values=['a', 'b', 'c'])
           .withColumn("code4", StringType(), values=['a', 'b', 'c'],
                       random=True)                      # values chosen at random
           .withColumn("code5", StringType(), values=['a', 'b', 'c'],
                       random=True, weights=[9, 1, 1])   # weighted random values
           )

# build the Spark DataFrame from the specification
df = df_spec.build()
num_rows = df.count()
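
The generated data frame is a regular Spark DataFrame, so it can be persisted or saved to external storage like any other. For example, a minimal sketch of saving it as a Delta table (the table name synthetic_test_data is illustrative):

# write the generated data to a Delta table
df.write.format("delta").mode("overwrite").saveAsTable("synthetic_test_data")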

Refer to the online documentation for further examples.

The GitHub repository also contains further examples in the examples directory.
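
The generator can also act as a synthetic data source inside a Delta Live Tables pipeline, as noted in the feature summary. A hedged sketch follows; the table name and columns are illustrative, and the dlt module is only available when the notebook runs as part of a DLT pipeline:

import dlt
import dbldatagen as dg
from pyspark.sql.types import IntegerType

@dlt.table(name="synthetic_source", comment="Synthetic data generated with dbldatagen")
def synthetic_source():
    # define and build the synthetic data specification inside the DLT table function
    spec = (dg.DataGenerator(spark, name="synthetic_source", rows=100000, partitions=4)
            .withIdOutput()
            .withColumn("value", IntegerType(), minValue=0, maxValue=1000, random=True))
    return spec.build()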

Spark and Databricks Runtime Compatibility

The dbldatagen package is intended to be compatible with recent LTS versions of the Databricks runtime, including older LTS versions from 10.4 LTS onwards. It also aims to be compatible with Delta Live Tables runtimes, including the current and preview channels.

While we don't specifically drop support for older runtimes, changes in Pyspark APIs, or in the APIs of dependent packages such as numpy, pandas, pyarrow, and pyparsing, may cause issues with older runtimes.

By design, installing dbldatagen does not install releases of dependent packages in order to preserve the curated set of packages pre-installed in any Databricks runtime environment.

When building on local environments, the build process uses the Pipfile and requirements files to determine the package versions for releases and unit tests.

Project Support

Please note that all projects released under Databricks Labs are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements (SLAs). They are provided AS-IS, and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects.

Any issues discovered through the use of this project should be filed as issues on the GitHub Repo.
They will be reviewed as time permits, but there are no formal SLAs for support.

Feedback

Issues with the application? Found a bug? Have a great idea for an addition? Feel free to file an issue.

More Repositories

1. dolly - Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform (Python, 10,811 stars)
2. pyspark-ai - English SDK for Apache Spark (Python, 739 stars)
3. dbx - 🧱 Databricks CLI eXtensions (aka dbx), a CLI tool for development and advanced Databricks workflows management (Python, 440 stars)
4. tempo - API for manipulating time series on top of Apache Spark: lagged time values, rolling statistics (mean, avg, sum, count, etc.), AS OF joins, downsampling, and interpolation (Jupyter Notebook, 306 stars)
5. mosaic - An extension to the Apache Spark framework that allows easy and fast processing of very large geospatial datasets (Jupyter Notebook, 270 stars)
6. overwatch - Capture deep metrics on one or all assets within a Databricks workspace (Scala, 226 stars)
7. ucx - Automated migrations to Unity Catalog (Python, 220 stars)
8. cicd-templates - Manage your Databricks deployments and CI with code (Python, 202 stars)
9. automl-toolkit - Toolkit for Apache Spark ML covering feature clean-up, feature importance calculation, information gain selection, distributed SMOTE, model selection and training, hyperparameter optimization and selection, and model interpretability (HTML, 191 stars)
10. migrate - Old scripts for one-off ST-to-E2 migrations; use the "terraform exporter" linked in the readme (Python, 186 stars)
11. dlt-meta - Metadata-driven Databricks Delta Live Tables framework for bronze/silver pipelines (Python, 147 stars)
12. dataframe-rules-engine - Extensible rules engine for custom Dataframe/Dataset validation (Scala, 134 stars)
13. discoverx - A Swiss Army knife for your Data Intelligence platform administration (Python, 105 stars)
14. geoscan - Geospatial clustering at massive scale (Scala, 94 stars)
15. jupyterlab-integration - DEPRECATED: Integrating Jupyter with Databricks via SSH (HTML, 71 stars)
16. smolder - HL7 Apache Spark Datasource (Scala, 61 stars)
17. feature-factory - Accelerator to rapidly deploy customized features for your business (Python, 55 stars)
18. databricks-sync - An experimental tool to synchronize a source Databricks deployment with a target Databricks deployment (Python, 46 stars)
19. doc-qa (Python, 45 stars)
20. transpiler - SIEM-to-Spark transpiler (Scala, 42 stars)
21. brickster - R Toolkit for Databricks (R, 40 stars)
22. delta-oms - DeltaOMS is a solution that helps build a centralized repository of Delta transaction logs and associated operational metrics/statistics for your Delta Lakehouse; Unity Catalog is supported in the v0.7.0-rc1 release (documentation: https://databrickslabs.github.io/delta-oms/v0.7.0-rc1/) (Scala, 38 stars)
23. pytester - Python Testing for Databricks (Python, 35 stars)
24. remorph - Cross-compiler and data reconciler into Databricks Lakehouse (Scala, 33 stars)
25. splunk-integration - Databricks Add-on for Splunk (Python, 26 stars)
26. dbignite (Python, 24 stars)
27. arcuate - Delta Sharing + MLflow for ML model and experiment exchange (an arcuate delta is a fan-shaped river delta) (Python, 22 stars)
28. databricks-sdk-r - Databricks SDK for R (Experimental) (R, 19 stars)
29. tika-ocr (Rich Text Format, 17 stars)
30. sandbox - Experimental or low-maturity things (Go, 16 stars)
31. blueprint - Baseline for Databricks Labs projects written in Python (Python, 16 stars)
32. delta-sharing-java-connector - A Java connector for delta.io/sharing/ that allows you to easily ingest data on any JVM (Java, 13 stars)
33. partner-connect-api (Scala, 12 stars)
34. pylint-plugin - Databricks Plugin for PyLint (Python, 10 stars)
35. lsql - Lightweight SQL execution wrapper built only on top of the Databricks SDK (Python, 9 stars)
36. waterbear - Automated provisioning of an industry Lakehouse with an enterprise data model (Python, 8 stars)