
🐳 The stupidly simple CLI workspace for your data warehouse.

Whale is actively being built and maintained by hyperquery. For our full data workspace for teams, check out hyperquery.

The simplest way to find tables, write queries, and take notes

whale is a lightweight, CLI-first SQL workspace for your data warehouse.

  • Execute SQL in .sql files using wh run, or in sql blocks within .md files using the --!wh-run flag and wh run.
  • Automatically index all of the tables in your warehouse as plain markdown files -- so they're easily versionable, searchable, and editable either locally or through a remote git server.
  • Search for tables and documentation.
  • Define and schedule basic metric calculations (in beta).

😁 Join the discussion on Slack.



For a demo of a git-backed workflow, check out dataframehq/whale-bigquery-public-data.

πŸ“” Documentation

Read the docs for a full overview of whale's capabilities.

Installation

Mac OS

brew install dataframehq/tap/whale

All others

Make sure Rust is installed on your local system. Then clone this repository and run the following in the base directory of the repo:

make && make install

If you are running this multiple times, make sure ~/.whale/libexec does not exist before reinstalling, or your virtual environment may not be rebuilt. We don't explicitly add an alias for the whale binary, so you'll want to add the following alias to your .bash_profile or .zshrc file:

alias wh=~/.whale/bin/whale

Getting started

Setup

For individual use, run the following command to go through the onboarding process. It will (a) set up all necessary files in ~/.whale, (b) walk you through cron job scheduling to periodically scrape metadata, and (c) set up a warehouse:

wh init

The cron job will run as you schedule it (by default, every 6 hours). If you're feeling impatient, you can also manually run wh etl to pull down the latest data from your warehouse.
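For reference, a 6-hourly schedule in your crontab would look something like the sketch below. This is illustrative only: `wh init` writes the entry for you, the exact line may differ, and the binary path assumes the default installation location.

```
# Illustrative crontab entry: scrape warehouse metadata every 6 hours.
# wh init configures this for you; the exact entry it writes may differ.
0 */6 * * * ~/.whale/bin/whale etl
```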

For team use, see the docs for instructions on how to set up and point your whale installation at a remote git server.

Seeding some sample data

If you just want to get a feel for how whale works, remove the ~/.whale directory and follow the instructions at dataframehq/whale-bigquery-public-data.

Go go go!

Run:

wh

to search over all metadata. Hitting enter will open the editable part of the docs in your default text editor, defined by the environment variable $EDITOR (if no value is set, whale falls back to the command open).
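For example, to have whale open docs in vim (any editor command works here; vim is just an illustration), set the variable in your shell profile:

```shell
# Tell whale (and other CLI tools) which editor to open on Enter.
# Add this line to your .bash_profile or .zshrc to make it permanent.
export EDITOR=vim
```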

To execute .sql files, run:

wh run your_query.sql

To execute markdown files, write the query in a ```sql block, then place --!wh-run on its own line inside the block. When the markdown file is executed, whale runs the query in any sql block containing this comment and replaces the `--!wh-run` line with the result set. To run the markdown file, run:

wh run your_markdown_file.md
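As a sketch, such a markdown file has a sql fence with the run comment as its last line. The snippet below writes a hypothetical example file so the layout is unambiguous (the file name, table, and query are purely illustrative, not from the whale repo):

```shell
# Write a hypothetical notes file containing an executable sql block.
printf '%s\n' \
  '# Daily event count' \
  '' \
  '```sql' \
  'SELECT COUNT(*) FROM analytics.events;' \
  '--!wh-run' \
  '```' > notes.md

# Running `wh run notes.md` would then replace the --!wh-run line
# with the result set of the query above it.
```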

A common pattern is to set up a shortcut in your IDE to execute wh run % for a smooth editing + execution workflow. For an example of how to do this in vim, see the docs here. This is one of the most powerful features of whale, enabling you to take notes and write executable queries seamlessly side-by-side.