  • Stars: 739
  • Rank: 60,916 (Top 2%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 1 year ago
  • Updated: about 1 year ago

Repository Details

English SDK for Apache Spark

Introduction

The English SDK for Apache Spark is an extremely simple yet powerful tool. It takes English instructions and compiles them into PySpark objects such as DataFrames. Its goal is to make Spark more user-friendly and accessible, letting you focus your efforts on extracting insights from your data.

For a more comprehensive introduction and background to our project, we have the following resources:

  • Blog Post: A detailed walkthrough of our project.
  • Demo Video: The announcement video with a demo from the Data + AI Summit 2023.
  • Breakout Session: A deep dive into the story behind the English SDK, its features, and future work, presented at the Data + AI Summit 2023.

Installation

pip install pyspark-ai

Configuring OpenAI LLMs

As of July 2023, we have found that GPT-4 works best with the English SDK. This model is readily accessible to all developers through the OpenAI API.

To use OpenAI's large language models (LLMs), set your OpenAI secret key in the OPENAI_API_KEY environment variable. The key can be found in your OpenAI account. Example:

export OPENAI_API_KEY='sk-...'

By default, SparkAI instances use the GPT-4 model. You're encouraged to experiment with other LLMs, which can be passed in when initializing a SparkAI instance to suit different use cases; a minimal sketch follows.
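
A sketch that pins the model explicitly instead of relying on the default (ChatOpenAI is LangChain's OpenAI chat model; temperature=0 is an illustrative choice, not a project requirement):

from langchain.chat_models import ChatOpenAI
from pyspark_ai import SparkAI

# Pin the model rather than relying on the GPT-4 default
llm = ChatOpenAI(model_name="gpt-4", temperature=0)
spark_ai = SparkAI(llm=llm)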

Usage

Initialization

from pyspark_ai import SparkAI

spark_ai = SparkAI()
spark_ai.activate()  # activate partial functions for Spark DataFrame
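
activate() is what adds the ai accessor used in the examples below; a quick sanity check (a sketch, assuming the default session created above):

df = spark_ai._spark.range(1)
df.ai  # accessor added by activate(); used as df.ai.transform(...), df.ai.plot(...), etc.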

You can also pass other LLMs when constructing the SparkAI instance. For example, following this guide, you can use Azure OpenAI:

from langchain.chat_models import AzureChatOpenAI
from pyspark_ai import SparkAI

llm = AzureChatOpenAI(
    deployment_name=...,
    model_name=...
)
spark_ai = SparkAI(llm=llm)
spark_ai.activate()  # activate partial functions for Spark DataFrame

As per Microsoft's Data Privacy page, using the Azure OpenAI service can provide better data privacy and security.

DataFrame Transformation

Given the following DataFrame df:

df = spark_ai._spark.createDataFrame(
    [
        ("Normal", "Cellphone", 6000),
        ("Normal", "Tablet", 1500),
        ("Mini", "Tablet", 5500),
        ("Mini", "Cellphone", 5000),
        ("Foldable", "Cellphone", 6500),
        ("Foldable", "Tablet", 2500),
        ("Pro", "Cellphone", 3000),
        ("Pro", "Tablet", 4000),
        ("Pro Max", "Cellphone", 4500)
    ],
    ["product", "category", "revenue"]
)

You can write English to perform transformations. For example:

df.ai.transform("What are the best-selling and the second best-selling products in every category?").show()
product category revenue
Foldable Cellphone 6500
Nromal Cellphone 6000
Mini Tablet 5500
Pro Tablet 4000
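
For comparison, a hand-written PySpark equivalent of that instruction (one plausible query; the SQL the SDK actually generates may differ):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Rank products by revenue within each category and keep the top two
w = Window.partitionBy("category").orderBy(F.desc("revenue"))
df.withColumn("rn", F.row_number().over(w)).where("rn <= 2").drop("rn").show()
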
df.ai.transform("Pivot the data by product and the revenue for each product").show()
Category Normal Mini Foldable Pro Pro Max
Cellphone 6000 5000 6500 3000 4500
Tablet 1500 5500 2500 4000 null
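
The same pivot expressed directly against the DataFrame API (illustrative; the SDK generates its own query):

# One row per category, one revenue column per product
df.groupBy("category").pivot("product").sum("revenue").show()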

For a detailed walkthrough of the transformations, please refer to our transform_dataframe.ipynb notebook.

Data Ingestion

If you have set up the Google Python client, you can ingest data via a search engine (credential setup is sketched after the example):

auto_df = spark_ai.create_df("2022 USA national auto sales by brand")

Otherwise, you can ingest data via URL:

auto_df = spark_ai.create_df("https://www.carpro.com/blog/full-year-2022-national-auto-sales-by-brand")

Take a look at the data:

auto_df.show(n=5)
+----+---------+-------------+--------------------+
|rank|    brand|us_sales_2022|sales_change_vs_2021|
+----+---------+-------------+--------------------+
|   1|   Toyota|      1849751|                  -9|
|   2|     Ford|      1767439|                  -2|
|   3|Chevrolet|      1502389|                   6|
|   4|    Honda|       881201|                 -33|
|   5|  Hyundai|       724265|                  -2|
+----+---------+-------------+--------------------+

Plot

auto_df.ai.plot()

[Plot: 2022 USA national auto sales by brand]

To plot with an instruction:

auto_df.ai.plot("pie chart for US sales market shares, show the top 5 brands and the sum of others")

[Plot: 2022 USA national auto sales market share by brand]

DataFrame Explanation

Given auto_top_growth_df, a DataFrame derived from auto_df by a transform like the ones above:

auto_top_growth_df.ai.explain()

In summary, this dataframe is retrieving the brand with the highest sales change in 2022 compared to 2021. It presents the results sorted by sales change in descending order and only returns the top result.

DataFrame Attribute Verification

auto_top_growth_df.ai.verify("expect sales change percentage to be between -100 to 100")

result: True

UDF Generation

@spark_ai.udf
def previous_years_sales(brand: str, current_year_sale: int, sales_change_percentage: float) -> int:
    """Calculate previous year's sales from the sales change percentage"""
    ...  # body left empty on purpose: the decorator asks the LLM to implement it from the signature and docstring

spark_ai._spark.udf.register("previous_years_sales", previous_years_sales)
auto_df.createOrReplaceTempView("autoDF")

spark_ai._spark.sql("select brand, previous_years_sales(brand, us_sales_2022, sales_change_vs_2021) as `2021_sales` from autoDF").show()
+---------+----------+
|    brand|2021_sales|
+---------+----------+
|   Toyota|   2032693|
|     Ford|   1803509|
|Chevrolet|   1417348|
|    Honda|   1315225|
|  Hyundai|    739045|
+---------+----------+
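
For reference, a hand-written body that reproduces the numbers above (previous_years_sales_manual is our illustrative name; the LLM-generated implementation may differ in detail):

def previous_years_sales_manual(brand: str, current_year_sale: int, sales_change_percentage: float) -> int:
    # previous = current / (1 + change%); truncating with int() matches the output above
    return int(current_year_sale / (1 + sales_change_percentage / 100))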

Cache

SparkAI supports a simple two-level cache: an in-memory staging cache and a persistent cache. Results of LLM calls and web searches are first written to the staging cache, which can be persisted with the commit() method. Cache lookups always consult both the in-memory staging cache and the persistent cache.

spark_ai.commit()
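
A minimal end-to-end sketch of the cache in action, reusing the ingestion example from earlier:

# The first run hits the LLM/web search and stages the result in memory
auto_df = spark_ai.create_df("https://www.carpro.com/blog/full-year-2022-national-auto-sales-by-brand")
# Persist the staging cache so later lookups can be served from the persistent cache
spark_ai.commit()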

Refer to example.ipynb for more detailed usage examples.

Contributing

We're delighted that you're considering contributing to the English SDK for Apache Spark project! Whether you're fixing a bug or proposing a new feature, your contribution is highly appreciated.

Before you start, please take a moment to read our Contribution Guide. This guide provides an overview of how you can contribute to our project. We're currently in the early stages of development, and we're working on introducing more comprehensive test cases and GitHub Actions jobs for enhanced testing of each pull request.

If you have any questions or need assistance, feel free to open a new issue in the GitHub repository.

Thank you for helping us improve the English SDK for Apache Spark. We're excited to see your contributions!

License

Licensed under the Apache License 2.0.

More Repositories

1. dolly (Python, 10,796 stars): Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform.
2. dbx (Python, 437 stars): 🧱 Databricks CLI eXtensions - aka dbx is a CLI tool for development and advanced Databricks workflows management.
3. tempo (Jupyter Notebook, 303 stars): API for manipulating time series on top of Apache Spark: lagged time values, rolling statistics (mean, avg, sum, count, etc.), AS OF joins, downsampling, and interpolation.
4. dbldatagen (Python, 281 stars): Generate relevant synthetic data quickly for your projects. The Databricks Labs synthetic data generator (aka `dbldatagen`) may be used to generate large simulated/synthetic data sets for tests, POCs, and other uses in Databricks environments, including in Delta Live Tables pipelines.
5. mosaic (Jupyter Notebook, 262 stars): An extension to the Apache Spark framework that allows easy and fast processing of very large geospatial datasets.
6. overwatch (Scala, 221 stars): Capture deep metrics on one or all assets within a Databricks workspace.
7. cicd-templates (Python, 200 stars): Manage your Databricks deployments and CI with code.
8. ucx (Python, 193 stars): Your best companion for upgrading to Unity Catalog. UCX will guide you, the Databricks customer, through the process of upgrading your account, groups, workspaces, jobs, etc. to Unity Catalog.
9. automl-toolkit (HTML, 190 stars): Toolkit for Apache Spark ML for feature clean-up, feature importance calculation, information gain selection, distributed SMOTE, model selection and training, hyperparameter optimization and selection, and model interpretability.
10. migrate (Python, 177 stars): Old scripts for one-off ST-to-E2 migrations. Use the "terraform exporter" linked in the readme.
11. dataframe-rules-engine (Scala, 134 stars): Extensible rules engine for custom DataFrame/Dataset validation.
12. dlt-meta (Python, 125 stars): A metadata-driven, DLT-based framework for bronze/silver pipelines.
13. discoverx (Python, 99 stars): A Swiss Army knife for your Data Intelligence platform administration.
14. geoscan (Scala, 92 stars): Geospatial clustering at massive scale.
15. jupyterlab-integration (HTML, 71 stars): DEPRECATED: Integrating Jupyter with Databricks via SSH.
16. smolder (Scala, 57 stars): HL7 Apache Spark Datasource.
17. feature-factory (Python, 55 stars): Accelerator to rapidly deploy customized features for your business.
18. databricks-sync (Python, 46 stars): An experimental tool to synchronize a source Databricks deployment with a target Databricks deployment.
19. doc-qa (Python, 42 stars)
20. transpiler (Scala, 41 stars): SIEM-to-Spark Transpiler.
21. delta-oms (Scala, 37 stars): DeltaOMS is a solution that helps build a centralized repository of Delta transaction logs and associated operational metrics/statistics for your Delta Lakehouse. Unity Catalog is supported in the v0.7.0-rc1 release. Documentation: https://databrickslabs.github.io/delta-oms/v0.7.0-rc1/
22. brickster (R, 36 stars): R Toolkit for Databricks.
23. splunk-integration (Python, 26 stars): Databricks Add-on for Splunk.
24. dbignite (Python, 22 stars)
25. arcuate (Python, 21 stars): Delta Sharing + MLflow for ML model & experiment exchange (arcuate delta - a fan-shaped river delta).
26. databricks-sdk-r (R, 20 stars): Databricks SDK for R (experimental).
27. remorph (Python, 18 stars): Cross-compiler and data reconciler into Databricks Lakehouse.
28. tika-ocr (Rich Text Format, 17 stars)
29. sandbox (Go, 16 stars): Experimental or low-maturity things.
30. blueprint (Python, 13 stars): Baseline for Databricks Labs projects written in Python.
31. delta-sharing-java-connector (Java, 12 stars): A Java connector for delta.io/sharing/ that allows you to easily ingest data on any JVM.
32. partner-connect-api (Scala, 12 stars)
33. waterbear (Python, 8 stars): Automated provisioning of an industry Lakehouse with an enterprise data model.
34. pylint-plugin (Python, 8 stars): Databricks plugin for PyLint.
35. lsql (Python, 6 stars): Lightweight SQL execution wrapper on top of the Databricks SDK.