
whirl

Fast iterative local development and testing of Apache Airflow workflows.

The idea of whirl is pretty simple: use Docker containers to start up Apache Airflow and the other components used in your workflow. This gives you a copy of your production environment that runs on your local machine. You can run your DAG locally from start to finish - with the same code as in production. Seeing your pipeline succeed gives you more confidence about the logic you are creating/refactoring and how it integrates with other components. It also gives new developers an isolated environment for experimenting with your workflows.

whirl connects the code of your DAG and your (mock) data to the Apache Airflow container that it spins up. Using volume mounts you are able to make changes to your code in your favorite IDE and immediately see the effect in the running Apache Airflow UI on your machine. This also works with custom Python modules that you are developing and using in your DAGs.

NOTE: whirl is not intended to replace proper (unit) testing of the logic you are orchestrating with Apache Airflow.

Prerequisites

whirl relies on Docker and Docker Compose. Make sure both are installed. If you are using Docker for Mac or Windows, ensure that it is configured with sufficient RAM (8 GB or more recommended) to run all your containers.

When you want to use whirl in your CI pipeline, you need to have jq installed. For example, with Homebrew:

brew install jq

The current implementation was developed on macOS but is intended to work with any platform supported by Docker. In our experience, Linux and macOS are fine. You can run it on native Windows 10 using WSL. Unfortunately, Docker on Windows 10 (version 1809) is hamstrung because it relies on Windows File Sharing (CIFS) to establish the volume mounts. Airflow hammers the volume a little harder than CIFS can handle, and you'll see intermittent FileNotFound errors in the volume mount. This may improve in the future. For now, running whirl inside a Linux VM in Hyper-V gives more reliable results.

Airflow Versions

As of January 2021, whirl uses Airflow 2.x.x as the default version. A dedicated tag was created for Airflow 1.10.x, which can be found here

Getting Started

Development

Clone this repository:

git clone https://github.com/godatadriven/whirl.git <target directory of whirl>

For ease of use you can add the base directory to your PATH environment variable, so the command whirl is available.

export PATH=<target directory of whirl>:${PATH}

Use the release

Download the latest Whirl release artifact

Extract the file (for example into /usr/local/opt)

tar -xvzf whirl-release.tar.gz -C /usr/local/opt

Make sure the whirl script is available on your path

export PATH=/usr/local/opt/whirl:$PATH

Usage

The whirl script is used to perform all actions.

Getting usage information

$ whirl -h
$ whirl --help

Starting whirl

The default action is to start the DAG in your current directory.

With the [-x example] command-line argument you can run whirl from anywhere and tell it which example DAG to run. The example refers to a directory with the same name in the examples directory located next to the whirl script.

Whirl expects an environment to be configured. You can pass this as a command line argument [-e environment] or you can configure it as environment variable WHIRL_ENVIRONMENT in a .whirl.env file. (See Configuring environment variables.) The environment refers to a directory with the same name in the envs directory located near the whirl script.

$ whirl [-x example] [-e <environment>] [start]

Specifying the start command line argument is a more explicit way to start whirl.

Stopping whirl

$ whirl [-x example] [-e <environment>] stop

Stops the configured environment.

If you want to stop all containers from a specific environment you can add the -e or --environment commandline argument with the name of the environment. This name corresponds with a directory in the envs directory.

Usage in a CI Pipeline

We run most of the examples from within our own CI (GitHub Actions); see our GitHub workflow for implementation details.

You can run an example in ci mode on your local system by using the whirl ci command. This will:

  • run the Docker containers daemonized in the background;
  • ensure the DAG(s) are unpaused; and
  • wait for the pipeline to either succeed or fail.
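The "wait for the pipeline to either succeed or fail" step can be sketched as a small polling loop. This is an illustrative sketch, not whirl's actual implementation; check_state is a hypothetical stand-in for querying the Airflow container for the DAG run state, and here it simply reads a local file so the loop can be demonstrated on its own:

```shell
# Illustrative sketch of the ci-mode wait loop (not whirl's real code).
# check_state is a hypothetical stand-in for asking Airflow for the
# DAG run state; here it reads a state file for demonstration purposes.
STATE_FILE="$(mktemp)"
echo "running" > "$STATE_FILE"

check_state() {
  cat "$STATE_FILE"
}

# Simulate the DAG finishing successfully after a moment.
( sleep 1; echo "success" > "$STATE_FILE" ) &

RESULT=""
for _ in $(seq 1 30); do
  RESULT="$(check_state)"
  [ "$RESULT" = "running" ] || break
  sleep 1
done
echo "final state: $RESULT"
```

In a real CI step, the loop would break on either "success" or "failed" and the script would exit non-zero on failure so the pipeline is marked accordingly.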

Upon success, the containers are stopped and whirl exits successfully.

In case of failure (or success, if failure is expected), we print the logs of the failed task and clean up before indicating that the pipeline has failed.

Configuring Environment Variables

Instead of using the environment option each time you run whirl, you can also configure your environment in a .whirl.env file. This can be in three places. They are applied in order:

  • A .whirl.env file in the root of this repository. This can also specify a default environment to be used when starting whirl. You do this by setting the WHIRL_ENVIRONMENT which references a directory in the envs folder. This repository contains an example you can modify. It specifies the default PYTHON_VERSION to be used in any environment.
  • A .whirl.env file in your envs/{your-env} subdirectory. The environment directory to use can be set by any of the other .whirl.env files or specified on the commandline. This is helpful to set environment specific variables. Of course it doesn't make much sense to set the WHIRL_ENVIRONMENT here.
  • A .whirl.env in your DAG directory to override any environment variables. This can be useful for example to overwrite the (default) WHIRL_ENVIRONMENT.
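As an illustration, a DAG-level .whirl.env that overrides the defaults might look like this. The variable names WHIRL_ENVIRONMENT and PYTHON_VERSION come from the description above; the values are hypothetical:

```shell
# Hypothetical .whirl.env in a DAG directory. It overrides values set in
# the repository root and the environment directory.
WHIRL_ENVIRONMENT=postgres-s3   # hypothetical name; must match a directory under envs/
PYTHON_VERSION=3.8              # hypothetical override of the default Python version
```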

Internal environment variables

Inside the whirl script the following environment variables are set:

  • DOCKER_CONTEXT_FOLDER: ${SCRIPT_DIR}/docker
    Base build context folder for Docker builds referenced in Docker Compose.
  • ENVIRONMENT_FOLDER: ${SCRIPT_DIR}/envs/<environment>
    Base folder for the environment to start. Contains docker-compose.yml and environment-specific preparation scripts.
  • DAG_FOLDER: $(pwd)
    Current working directory, used as the Airflow DAG folder. Can contain preparation scripts for this specific DAG.
  • PROJECTNAME: $(basename ${DAG_FOLDER})
    The name of the DAG folder.
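A minimal sketch of how these variables could be derived inside a script like whirl. This is an illustration under the usual shell conventions, not whirl's literal implementation:

```shell
# Sketch: deriving the internal variables (not whirl's literal code).
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"    # directory containing the whirl script
DOCKER_CONTEXT_FOLDER="${SCRIPT_DIR}/docker"
ENVIRONMENT="${WHIRL_ENVIRONMENT:-default}"    # from -e flag or a .whirl.env file
ENVIRONMENT_FOLDER="${SCRIPT_DIR}/envs/${ENVIRONMENT}"
DAG_FOLDER="$(pwd)"                            # the DAG directory you run whirl from
PROJECTNAME="$(basename "${DAG_FOLDER}")"      # derived from the DAG folder name
echo "project: ${PROJECTNAME}"
```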

Structure

This project is based on docker-compose and the notion of different environments where Airflow is a central part. The rest of the environment depends on the tools/setup of the production environment used in your situation.

The whirl script combines the DAG and the environment to make a fully functional setup.

To accommodate different examples:

  • The environments are split up into separate environment-specific directories inside the envs/ directory.
  • The DAGs are split into sub-directories in the examples/ directory.

Environments

Environments use Docker Compose to start containers which together mimic your production environment. The basis of the environment is the docker-compose.yml file which as a minimum declares the Airflow container to run. Extra tools (e.g. s3, sftp) can be linked together in the docker-compose file to form your specific environment.

Each environment also contains some setup code needed for Airflow to understand the environment, for example Connections and Variables. Each environment has a whirl.setup.d/ directory which is mounted in the Airflow container. On startup all scripts in this directory are executed. This is a location for installing and configuring extra client libraries that are needed to make the environment function correctly; for example awscli if S3 access is required.
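The startup behaviour can be illustrated with a small, self-contained sketch: every script in a whirl.setup.d/ directory is executed in turn. The directory and scripts below are created purely for the demonstration; real setup scripts would, for example, install awscli or register Airflow Connections:

```shell
# Self-contained demonstration of the whirl.setup.d mechanism:
# every script in the directory is executed on startup, in sorted order.
SETUP_DIR="$(mktemp -d)/whirl.setup.d"
mkdir -p "$SETUP_DIR"

# Two demo scripts (real ones would install client libraries or
# configure Airflow Connections and Variables).
printf '#!/bin/sh\necho "01: installing client libraries"\n' > "$SETUP_DIR/01_install.sh"
printf '#!/bin/sh\necho "02: configuring connections"\n'     > "$SETUP_DIR/02_connections.sh"
chmod +x "$SETUP_DIR"/*.sh

# Execute all scripts in order, as happens on container startup.
for script in "$SETUP_DIR"/*.sh; do
  "$script"
done
```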

DAGs

The DAGs in this project are inside the examples/ directory. In your own project you can have your code in its own location outside this repository.

Each example directory contains at least one example DAG. Project-specific code can be placed there as well. As with the environment, the DAG directory can contain a whirl.setup.d/ directory which is also mounted inside the Airflow container. Upon startup all scripts in this directory are executed; the environment-specific whirl.setup.d/ is executed first, followed by the DAG one.

This is also a location for installing and configuring extra client libraries that are needed to make the DAG function correctly; for example a mock API endpoint.

Examples

This repository contains some example environments and workflows. The components used might serve as a starting point for your own environment. If you have a good example you'd like to add, please submit a merge request!

Each example contains its own README file explaining the specifics of that example.

Generic running of examples

To run an example:

$ cd ./examples/<example-dag-directory>
# Note: here we pass the whirl environment as a command-line argument. It can also be configured with the WHIRL_ENVIRONMENT variable
$ whirl -e <environment to use>

or, from anywhere:

# Note: here we pass the example and the whirl environment as command-line arguments. They can also be configured with environment variables
$ whirl -x <example to run> -e <environment to use>

Open your browser to http://localhost:5000 to access the Airflow UI. Manually enable the DAG and watch the pipeline run to successful completion.

References

An early version of whirl was brought to life at ING. Bas Beelen gave a presentation describing how whirl was helpful in their infrastructure during the 2nd Apache Airflow Meetup, January 23 2019, hosted at Google London HQ.

Whirl explained at Apache Airflow Meetup
