• Stars: 262
• Rank: 155,245 (top 4%)
• Language: Jupyter Notebook
• License: Other
• Created: almost 3 years ago
• Updated: 3 months ago

Repository Details

Mosaic by Databricks Labs

An extension to the Apache Spark framework that allows easy and fast processing of very large geospatial datasets.

Why Mosaic?

Mosaic was created to simplify the implementation of scalable geospatial data pipelines by binding together common open-source geospatial libraries via Apache Spark, with a set of examples and best practices for common geospatial use cases.

What does it provide?

Mosaic provides geospatial tools for the stages of a general geospatial pipeline:

Image: Mosaic general pipeline.

The supported languages are Scala, Python, R, and SQL.

How does it work?

The Mosaic library is written in Scala to guarantee maximum performance with Spark, and, when possible, it uses code generation to give an extra performance boost.

The other supported languages (Python, R, and SQL) are thin wrappers around the Scala code.

Image 1: Mosaic logical design.
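
As a quick illustration of this design (a sketch, assuming the Python bindings are installed and enabled as described below; st_area is one of Mosaic's documented ST_ functions), a wrapper call does no geometry work itself:

import mosaic as mos
from pyspark.sql import functions as F

# The wrapper returns a pyspark Column whose underlying expression is
# implemented by the Scala core (and is therefore eligible for Spark codegen).
area = mos.st_area(F.col("geom"))
print(type(area))  # <class 'pyspark.sql.column.Column'>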

Getting started

We recommend using Databricks Runtime version 11.3 LTS or 12.2 LTS with Photon enabled; this will leverage the Databricks H3 expressions when using the H3 grid system.

โš ๏ธ Mosaic 0.3.x series does not support DBR 13.x (coming soon with Mosaic 0.4.x series); also, DBR 10 is no longer supported in Mosaic.

As of the 0.3.11 release, Mosaic issues the following warning when initialized on a cluster that is neither Photon Runtime nor Databricks Runtime ML [ADB | AWS | GCP]:

DEPRECATION WARNING: Mosaic is not supported on the selected Databricks Runtime. Mosaic will stop working on this cluster from version v0.4.0+. Please use a Databricks Photon-enabled Runtime (for performance benefits) or Runtime ML (for spatial AI benefits).

If you are receiving this warning in v0.3.11, you will want to change to a supported runtime before updating Mosaic to 0.4.0. We are making this change to streamline Mosaic internals and align them with future product APIs, which are powered by Photon. As part of this change, Mosaic will standardize on JTS as its default and supported vector geometry provider.
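
To check which runtime a cluster is running before upgrading, you can read the version tag that Databricks clusters expose through Spark conf (a sketch; the default value here is illustrative):

# Databricks clusters report the runtime version via this conf key
dbr = spark.conf.get("spark.databricks.clusterUsageTags.sparkVersion", "unknown")
print(dbr)  # e.g. "12.2.x-photon-scala2.12" on a Photon-enabled runtime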

Documentation

Check out the documentation pages.

Python

Install databricks-mosaic as a cluster library, or run the following from a Databricks notebook:

%pip install databricks-mosaic

Then enable it with

from mosaic import enable_mosaic
enable_mosaic(spark, dbutils)
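
Continuing from the cell above, a minimal usage sketch (the WKT literal and column names are illustrative; st_geomfromwkt and st_area are Mosaic's documented expression names):

from pyspark.sql import functions as F
import mosaic as mos

# Parse WKT into Mosaic geometries, then compute each polygon's area
df = spark.createDataFrame([("POLYGON ((0 0, 0 1, 1 1, 1 0, 0 0))",)], ["wkt"])
(df
 .withColumn("geom", mos.st_geomfromwkt(F.col("wkt")))
 .withColumn("area", mos.st_area(F.col("geom")))
 .show())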

Scala

Get the jar from the releases page and install it as a cluster library.

Then enable it with

import com.databricks.labs.mosaic.functions.MosaicContext
import com.databricks.labs.mosaic.H3
import com.databricks.labs.mosaic.JTS

// Build a context with the H3 grid index system and the JTS geometry API,
// then bring the Mosaic functions into scope
val mosaicContext = MosaicContext.build(H3, JTS)
import mosaicContext.functions._

R

Get the Scala JAR and the R bindings library from the releases page. Install the JAR as a cluster library and copy sparkrMosaic.tar.gz to DBFS (this example uses the /FileStore location, but you can put it anywhere on DBFS).

library(SparkR)

# Install the R bindings from the tarball copied to DBFS (mounted locally under /dbfs)
install.packages('/dbfs/FileStore/sparkrMosaic.tar.gz', repos=NULL)

Enable the R bindings

library(sparkrMosaic)
enableMosaic()

SQL

Configure the Automatic SQL Registration or follow the Scala installation process and register the Mosaic SQL functions in your SparkSession from a Scala notebook cell:

%scala
import com.databricks.labs.mosaic.functions.MosaicContext
import com.databricks.labs.mosaic.H3
import com.databricks.labs.mosaic.JTS

val mosaicContext = MosaicContext.build(H3, JTS)
// Register the Mosaic SQL functions with the current SparkSession
mosaicContext.register(spark)
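
Once registered, the functions are available to any language bound to the same SparkSession. For example, from a Python cell (a sketch with an illustrative WKT literal):

# Assumes MosaicContext.register(spark) has already run in this SparkSession
spark.sql("""
    SELECT st_area(st_geomfromwkt('POLYGON ((0 0, 0 1, 1 1, 1 0, 0 0))')) AS area
""").show()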

Examples

| Example | Description | Links |
| --- | --- | --- |
| Quick Start | Performing spatial point-in-polygon joins on the NYC Taxi dataset | python, scala, R, SQL |
| Spatial KNN | Runnable notebook-based example using the Mosaic SpatialKNN model | python |
| Open Street Maps | Ingesting and processing the Open Street Maps dataset with Delta Live Tables to extract building polygons and calculate aggregation statistics over H3 indexes | python |
| STS Transfers | Detecting ship-to-ship transfers at scale by leveraging Mosaic to process AIS data | python, blog |

You can import these examples into your Databricks workspace using these instructions.
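
For flavor, the core of a point-in-polygon join like the Quick Start's looks roughly like this (a sketch with illustrative table and column names; st_contains and st_geomfromwkt are documented Mosaic expressions):

import mosaic as mos
from pyspark.sql import functions as F

# Illustrative inputs: one polygon "zone" and one point "trip"
zones = (spark.createDataFrame([("zone-1", "POLYGON ((0 0, 0 2, 2 2, 2 0, 0 0))")],
                               ["zone_id", "zone_wkt"])
         .withColumn("zone_geom", mos.st_geomfromwkt(F.col("zone_wkt"))))
trips = (spark.createDataFrame([("trip-1", "POINT (1 1)")],
                               ["trip_id", "pickup_wkt"])
         .withColumn("pickup_point", mos.st_geomfromwkt(F.col("pickup_wkt"))))

# Keep trips whose pickup point falls inside a zone
joined = trips.join(zones, mos.st_contains(zones["zone_geom"], trips["pickup_point"]))
joined.select("trip_id", "zone_id").show()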

Ecosystem

Mosaic is intended to augment existing systems and unlock their potential by integrating Spark, Delta Lake, and third-party frameworks into the Lakehouse architecture.

Image 2: Mosaic ecosystem - Lakehouse integration.

Project Support

Please note that all projects in the databrickslabs github space are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements (SLAs). They are provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects.

Any issues discovered through the use of this project should be filed as GitHub Issues on the Repo. They will be reviewed as time permits, but there are no formal SLAs for support.

More Repositories

1. dolly - Databricks' Dolly, a large language model trained on the Databricks Machine Learning Platform (Python, 10,796 stars)
2. pyspark-ai - English SDK for Apache Spark (Python, 739 stars)
3. dbx - 🧱 Databricks CLI eXtensions: a CLI tool for development and advanced Databricks workflow management (Python, 437 stars)
4. tempo - API for manipulating time series on top of Apache Spark: lagged time values, rolling statistics (mean, avg, sum, count, etc.), AS OF joins, downsampling, and interpolation (Jupyter Notebook, 303 stars)
5. dbldatagen - The Databricks Labs synthetic data generator, for quickly generating large simulated/synthetic data sets for tests, POCs, and other uses in Databricks environments, including Delta Live Tables pipelines (Python, 281 stars)
6. overwatch - Capture deep metrics on one or all assets within a Databricks workspace (Scala, 221 stars)
7. cicd-templates - Manage your Databricks deployments and CI with code (Python, 200 stars)
8. ucx - Your best companion for upgrading to Unity Catalog; UCX guides you through upgrading your account, groups, workspaces, jobs, etc. (Python, 193 stars)
9. automl-toolkit - Toolkit for Apache Spark ML: feature clean-up, feature importance calculation, information gain selection, distributed SMOTE, model selection and training, hyperparameter optimization and selection, and model interpretability (HTML, 190 stars)
10. migrate - Old scripts for one-off ST-to-E2 migrations; use the "terraform exporter" linked in the readme (Python, 177 stars)
11. dataframe-rules-engine - Extensible rules engine for custom DataFrame/Dataset validation (Scala, 134 stars)
12. dlt-meta - A metadata-driven, DLT-based framework for bronze/silver pipelines (Python, 125 stars)
13. discoverx - A Swiss Army knife for Data Intelligence platform administration (Python, 99 stars)
14. geoscan - Geospatial clustering at massive scale (Scala, 92 stars)
15. jupyterlab-integration - DEPRECATED: integrating Jupyter with Databricks via SSH (HTML, 71 stars)
16. smolder - HL7 Apache Spark data source (Scala, 57 stars)
17. feature-factory - Accelerator to rapidly deploy customized features for your business (Python, 55 stars)
18. databricks-sync - An experimental tool to synchronize a source Databricks deployment with a target Databricks deployment (Python, 46 stars)
19. doc-qa - (Python, 42 stars)
20. transpiler - SIEM-to-Spark transpiler (Scala, 41 stars)
21. delta-oms - A solution that helps build a centralized repository of Delta transaction logs and associated operational metrics/statistics for your Delta Lakehouse; Unity Catalog is supported in the v0.7.0-rc1 release (documentation: https://databrickslabs.github.io/delta-oms/v0.7.0-rc1/) (Scala, 37 stars)
22. brickster - R toolkit for Databricks (R, 36 stars)
23. splunk-integration - Databricks Add-on for Splunk (Python, 26 stars)
24. dbignite - (Python, 22 stars)
25. arcuate - Delta Sharing + MLflow for ML model and experiment exchange (an arcuate delta is a fan-shaped river delta) (Python, 21 stars)
26. databricks-sdk-r - Databricks SDK for R (experimental) (R, 20 stars)
27. remorph - Cross-compiler and data reconciler for the Databricks Lakehouse (Python, 18 stars)
28. tika-ocr - (Rich Text Format, 17 stars)
29. sandbox - Experimental or low-maturity projects (Go, 16 stars)
30. blueprint - Baseline for Databricks Labs projects written in Python (Python, 13 stars)
31. delta-sharing-java-connector - A Java connector for delta.io/sharing that allows you to easily ingest data on any JVM (Java, 12 stars)
32. partner-connect-api - (Scala, 12 stars)
33. waterbear - Automated provisioning of an industry Lakehouse with an enterprise data model (Python, 8 stars)
34. pylint-plugin - Databricks plugin for PyLint (Python, 8 stars)
35. lsql - Lightweight SQL execution wrapper on top of the Databricks SDK (Python, 6 stars)