
🧙 Build, run, and manage data pipelines for integrating and transforming data.

Mage

🧙 A modern replacement for Airflow.

Documentation   🌪️   Watch 2 min demo   🌊   Play with live tool   🔥   Get instant help

Give your data team magical powers

Integrate and synchronize data from 3rd party sources

Build real-time and batch pipelines to transform data using Python, SQL, and R

Run, monitor, and orchestrate thousands of pipelines without losing sleep


1๏ธโƒฃ ๐Ÿ—๏ธ

Build

Have you met anyone who said they loved developing in Airflow?
That's why we designed an easy developer experience that you'll enjoy.

Easy developer experience
Start developing locally with a single command or launch a dev environment in your cloud using Terraform.

Language of choice
Write code in Python, SQL, or R in the same data pipeline for ultimate flexibility.

Engineering best practices built-in
Each step in your pipeline is a standalone file containing modular code that's reusable and testable with data validations. No more DAGs with spaghetti code.
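As a rough illustration of "modular code that's reusable and testable with data validations," here is a plain-Python sketch (no Mage decorators, hypothetical function names): one small step function paired with a validation that runs against its output.

```python
# Hedged sketch of a standalone, testable pipeline step: the step is a
# pure function, and a validation asserts properties of its output.
def select_columns(rows, columns):
    """Keep only the given columns from a list of row dicts."""
    return [{c: row[c] for c in columns} for row in rows]

def validate_output(output, columns):
    # Data validation: output is non-empty and has exactly the requested columns.
    assert output, "output should not be empty"
    assert all(set(row) == set(columns) for row in output)

rows = [
    {"age": 22, "fare": 7.25, "survived": 0},
    {"age": 38, "fare": 71.28, "survived": 1},
]
out = select_columns(rows, ["age", "fare"])
validate_output(out, ["age", "fare"])  # passes silently
```

Because the step is just a function in its own file, it can be imported and unit-tested without running the whole pipeline.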

↓

2๏ธโƒฃ ๐Ÿ”ฎ

Preview

Stop wasting time waiting around for your DAGs to finish testing.
Get instant feedback from your code each time you run it.

Interactive code
Immediately see results from your code's output with an interactive notebook UI.

Data is a first-class citizen
Each block of code in your pipeline produces data that can be versioned, partitioned, and cataloged for future use.

Collaborate on cloud
Develop collaboratively on cloud resources, version control with Git, and test pipelines without waiting for an available shared staging environment.
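The "data is a first-class citizen" idea above can be made concrete with a toy partitioning example. This is not Mage's actual storage layer (which handles versioning and cataloging for you); it only illustrates what partitioning a block's output by a key means:

```python
# Hedged sketch: split a block's output rows into partitions keyed by
# one column (e.g. a date), the way partitioned data products are organized.
from collections import defaultdict

def partition_by(rows, key):
    """Group rows into partitions keyed by the value of one column."""
    partitions = defaultdict(list)
    for row in rows:
        partitions[row[key]].append(row)
    return dict(partitions)

rows = [
    {"date": "2023-01-01", "orders": 12},
    {"date": "2023-01-02", "orders": 9},
    {"date": "2023-01-01", "orders": 4},
]
parts = partition_by(rows, "date")
# parts["2023-01-01"] holds two rows; parts["2023-01-02"] holds one.
```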

↓

3๏ธโƒฃ ๐Ÿš€

Launch

Don't have a large team dedicated to Airflow?
Mage makes it easy for a single developer or small team to scale up and manage thousands of pipelines.

Fast deploy
Deploy Mage to AWS, GCP, or Azure with only 2 commands using maintained Terraform templates.

Scaling made simple
Transform very large datasets directly in your data warehouse or through a native integration with Spark.

Observability
Operationalize your pipelines with built-in monitoring, alerting, and observability through an intuitive UI.

🧙 Intro

Mage is an open-source data pipeline tool for transforming and integrating data.

  1. Quick start
  2. Demo
  3. Tutorials
  4. Documentation
  5. Features
  6. Core design principles
  7. Core abstractions
  8. Contributing

๐Ÿƒโ€โ™€๏ธ Quick start

You can install and run Mage using Docker (recommended), pip, or conda.

Install using Docker

  1. Create a new project and launch the tool (change demo_project to any other name if you want):

    docker run -it -p 6789:6789 -v $(pwd):/home/src mageai/mageai \
      /app/run_app.sh mage start demo_project
    • If you want to run Mage locally on a different port, change the host port (the first number after -p) in the command above. For example, to use port 6790, run:
    docker run -it -p 6790:6789 -v $(pwd):/home/src mageai/mageai \
      /app/run_app.sh mage start demo_project

    Want to use Spark or other integrations? Read more about integrations.

  2. Open http://localhost:6789 in your browser and build a pipeline.

  • If you changed the Docker port, go to http://127.0.0.1:[port] (e.g. http://127.0.0.1:6790) in your browser to view the pipelines dashboard.

Using pip or conda

  1. Install Mage

    (a) Into the current virtual environment:

    pip install mage-ai

    or

    conda install -c conda-forge mage-ai

    (b) Into a new virtual environment (e.g., myenv):

    python3 -m venv myenv
    source myenv/bin/activate
    pip install mage-ai

    or

    conda create -n myenv -c conda-forge mage-ai
    conda activate myenv

    For additional packages (e.g. spark, postgres, etc), please see Installing extra packages.

    If you run into errors, please see Install errors.

  2. Create a new project and launch the tool (change demo_project to any other name if you want):

    mage start demo_project
  3. Open http://localhost:6789 in your browser and build a pipeline.


🎮 Demo

Live demo

Build and run a data pipeline with our demo app.

WARNING

The live demo is public, so please don't save anything sensitive (e.g. passwords, secrets, etc.).

Demo video (2 min)

Mage quick start demo (video)


๐Ÿ‘ฉโ€๐Ÿซ Tutorials



🔮 Features

🎶 Orchestration: Schedule and manage data pipelines with observability.
📓 Notebook: Interactive Python, SQL, and R editor for coding data pipelines.
🏗️ Data integrations: Synchronize data from 3rd party sources to your internal destinations.
🚰 Streaming pipelines: Ingest and transform real-time data.
⎈ dbt: Build, run, and manage your dbt models with Mage.

A sample data pipeline defined across 3 files ➝

  1. Load data ➝
    import pandas as pd

    @data_loader
    def load_csv_from_file():
        return pd.read_csv('default_repo/titanic.csv')
  2. Transform data ➝
    @transformer
    def select_columns_from_df(df, *args):
        return df[['Age', 'Fare', 'Survived']]
  3. Export data ➝
    @data_exporter
    def export_titanic_data_to_disk(df) -> None:
        df.to_csv('default_repo/titanic_transformed.csv')
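Stripped of the Mage decorators, the three steps above chain like ordinary functions. Here is a hedged, self-contained sketch that swaps pandas for the stdlib csv module so it runs anywhere (the data values are illustrative, not the real Titanic file):

```python
# Sketch of the load -> transform -> export chain as plain functions.
import csv
import io

def load_rows(text):
    """Load step: parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def select_columns(rows):
    """Transform step: keep only the Age, Fare, and Survived columns."""
    keep = ("Age", "Fare", "Survived")
    return [{k: row[k] for k in keep} for row in rows]

def export_rows(rows):
    """Export step: serialize the rows back to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["Age", "Fare", "Survived"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

raw = "Age,Fare,Survived,Name\n22,7.25,0,Braund\n38,71.28,1,Cumings\n"
exported = export_rows(select_columns(load_rows(raw)))
# exported keeps only the three selected columns.
```

In Mage, each function would live in its own block file, and the tool wires the output of one block into the input of the next.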

What the data pipeline looks like in the UI ➝

data pipeline overview

New? We recommend reading about blocks and learning from a hands-on tutorial.

Ask us questions on Slack


๐Ÿ”๏ธ Core design principles

Every user experience and technical design decision adheres to these principles.

💻 Easy developer experience: Open-source engine that comes with a custom notebook UI for building data pipelines.
🚢 Engineering best practices built-in: Build and deploy data pipelines using modular code. No more writing throwaway code or trying to turn notebooks into scripts.
💳 Data is a first-class citizen: Designed from the ground up specifically for running data-intensive workflows.
🪐 Scaling is made simple: Analyze and process large data quickly for rapid iteration.

🛸 Core abstractions

These are the fundamental concepts that Mage uses to operate.

Project: Like a repository on GitHub; this is where you write all your code.
Pipeline: Contains references to all the blocks of code you want to run, plus charts for visualizing data, and organizes the dependencies between blocks.
Block: A file with code that can be executed independently or within a pipeline.
Data product: Every block produces data after it's executed; in Mage, these outputs are called data products.
Trigger: A set of instructions that determine when or how a pipeline should run.
Run: A record of a pipeline or block execution, storing when it started, its status, when it completed, any runtime variables used, etc.
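To make the relationships between these abstractions concrete, here is a toy model of how they nest. These dataclasses are purely illustrative assumptions, not Mage's internal data model:

```python
# Hypothetical sketch: pipelines contain blocks, triggers point at
# pipelines, and runs record one execution of a pipeline.
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str                                      # one executable code file
    upstream: list = field(default_factory=list)   # names of dependency blocks

@dataclass
class Pipeline:
    name: str
    blocks: list = field(default_factory=list)

@dataclass
class Trigger:
    pipeline: str
    schedule: str          # e.g. a cron-style expression

@dataclass
class Run:
    pipeline: str
    status: str = "pending"   # plus start/end times, runtime variables, etc.

pipeline = Pipeline(
    name="demo_pipeline",
    blocks=[
        Block("load_data"),
        Block("transform", upstream=["load_data"]),
        Block("export", upstream=["transform"]),
    ],
)
trigger = Trigger(pipeline="demo_pipeline", schedule="0 * * * *")
```

The upstream lists are what make a pipeline a DAG: each block declares which blocks must finish before it runs.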

๐Ÿ™‹โ€โ™€๏ธ Contributing and developing

Add features and instantly improve the experience for everyone.

Check out the contributing guide to set up your development environment and start building.


👨‍👩‍👧‍👦 Community

Individually, we're a mage.

🧙 Mage

Magic is indistinguishable from advanced technology. A mage is someone who uses magic (aka advanced technology). Together, we're Magers!

๐Ÿง™โ€โ™‚๏ธ๐Ÿง™ Magers (/หˆmฤjษ™r/)

A group of mages who help each other realize their full potential! Let's hang out and chat together ➝

Hang out on Slack

For real-time news, fun memes, data engineering topics, and more, join us on ➝

Twitter
LinkedIn
GitHub
Slack

🤔 Frequently Asked Questions (FAQs)

Check out our FAQ page for answers to our most frequently asked questions.


🪪 License

See the LICENSE file for licensing information.



More Repositories

  1. mage-zoomcamp: Resources for the Mage component of the Data Engineering Zoomcamp: https://github.com/DataTalksClub/data-engineering-zoomcamp/tree/main (Dockerfile, 93 stars)
  2. mage-ai-terraform-templates: Terraform templates for deploying mage-ai to AWS, GCP, and Azure (HCL, 39 stars)
  3. mlops (Python, 18 stars)
  4. compose-quickstart: A quickstart repo for Mage using Docker Compose (Dockerfile, 15 stars)
  5. machine_learning: The definitive end-to-end machine learning (ML lifecycle) guide and tutorial for data engineers (Python, 15 stars)
  6. magic-devcontainer: A demo instance of Mage for pulling sample data from a public Google Pub/Sub topic and transforming it with dbt (Python, 11 stars)
  7. helm-charts (Smarty, 10 stars)
  8. dbt-quickstart (Python, 8 stars)
  9. docker: Dockerfile and Docker Compose templates (Dockerfile, 8 stars)
  10. assets: Media assets used in repository documentation (5 stars)
  11. rag-project (Python, 4 stars)
  12. llm_orchestration (Python, 3 stars)
  13. demo_etl_pipeline: Demo pipeline for loading, transforming, and exporting restaurant data (Python, 2 stars)
  14. datasets: Datasets to play with (2 stars)
  15. platform_template: Mage project platform template for using multiple projects and other non-Mage projects in one Mage ultra project (Python, 2 stars)
  16. .github (1 star)
  17. mage-libpostal-docker: Sample Mage Docker Compose setup for building a Docker image with libpostal (Python, 1 star)
  18. etl-demo: Mage ELT demo for pulling data from an API, performing transformations, and writing to a local DuckDB database (Python, 1 star)
  19. mage_demo_project: Demo project containing data integration pipelines and batch transformation pipelines (Python, 1 star)