  • Stars: 174
  • Rank: 219,104 (Top 5%)
  • Language: Python
  • License: MIT License
  • Created: about 6 years ago
  • Updated: 5 months ago

Repository Details

🎡 Automated build repo for Python wheels and source packages

wheelwright

This repo builds release wheels and source packages for Python libraries available as GitHub repositories. We're currently using it to build wheels for spaCy and our other libraries. The build repository integrates with Azure Pipelines and builds the artifacts for macOS, Linux and Windows on Python 3.5+. All wheels are available in the releases.

๐Ÿ™ Special thanks to Nathaniel J. Smith for helping us out with this, to Matthew Brett for multibuild, and of course to the PyPA team for their hard work on Python packaging.

โš ๏ธ This repo has been updated to use Azure Pipelines instead of Travis and Appveyor (see the v1 branch for the old version). We also dropped support for Python 2.7. The code is still experimental and currently mostly intended to build wheels for our projects. For more details on how it works, check out the FAQ below.


🎡 Usage

Quickstart

  1. Fork or clone this repo and run pip install -r requirements.txt to install its requirements.
  2. Generate a personal GitHub token with access to the repo, user and admin:repo_hook scopes and put it in a file github-secret-token.txt in the root of the repo. Commit the changes. Don't worry, the secrets file is excluded in the .gitignore.
  3. Set up a GitHub service connection on Azure Pipelines with a personal access token and name it wheelwright. This will be used to upload the artifacts to the GitHub release.
  4. Run python run.py build your-org/your-repo [commit/tag].
  5. Once the build is complete, the artifacts will show up in the GitHub release wheelwright created for the build. They'll also be available as release artifacts in Azure Pipelines, so you can add a release process that uploads them to PyPI.

Package requirements

Wheelwright currently makes the following assumptions about the packages you're building and their repos:

  • The repo includes a requirements.txt that lists all dependencies for building and testing.
  • The project uses pytest for testing and tests are shipped inside the main package so they can be run from an installed wheel.
  • The setup.py takes care of the whole build and no other steps are required: setup.py sdist builds the sdist and setup.py bdist_wheel builds the wheel. A quick local check for these assumptions is sketched below.
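
If you want to check these assumptions for a package before kicking off a CI build, a rough local dry run might look like the following (a sketch only, not wheelwright's CI script; PACKAGE-NAME is a placeholder for the package's Python import name):

# Install build/test dependencies, build, install the wheel, and run the tests from it
pip install -r requirements.txt
python setup.py sdist bdist_wheel
pip install dist/*.whl
pytest --pyargs PACKAGE-NAME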

Setup and Installation

Make a local clone of this repo:

git clone https://github.com/explosion/wheelwright

Next, install its requirements (ideally in a virtual environment):

pip install -r requirements.txt

Generate a personal GitHub token in your GitHub developer settings. Give it some memorable description, and check the box to give it the "repo" scope. This will give you some gibberish like f7d4d475c85ba2ae9557391279d1fc2368f95c38. Next, go into your wheelwright checkout, create a file called github-secret-token.txt, and write the gibberish into this file.

Don't worry, github-secret-token.txt is listed in .gitignore, so it's difficult to accidentally commit it. Instead of adding the file, you can also provide the token via the GITHUB_SECRET_TOKEN environment variable.
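
For example, for a single shell session (substitute your own token for the placeholder value from above):

# Provide the token via the environment instead of github-secret-token.txt
export GITHUB_SECRET_TOKEN="f7d4d475c85ba2ae9557391279d1fc2368f95c38"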

Security notes

  • Be careful with this gibberish; anyone who gets it can impersonate you to GitHub.

  • If you're ever worried that your token has been compromised, you can delete it in your GitHub token settings, and then generate a new one.

  • This token is only used to access the wheelwright repository, so if you want to be extra-careful you could create a new GitHub user, grant them access to this repo only, and then use a token generated with that user's account.

Building wheels

Note that the run.py script requires Python 3.6+. If you want to build wheels for the v1.31.2 tag inside the explosion/cymem repository, then run:

cd wheelwright
python run.py build explosion/cymem v1.31.2

Eventually, if everything goes well, you'll end up with wheels attached to a new GitHub release and in Azure Pipelines. You can then either publish them via a custom release process, or download them manually:

python run.py download cymem-v1.31.2

In Azure Pipelines, the artifacts are available via the "Artifacts" button. You can also set up a release pipeline with twine authentication, so you can publish your package to PyPI in one click. Also see this blog post for an example.
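
If you prefer to publish manually rather than through a release pipeline, one option (assuming twine is installed and configured with your PyPI credentials) is to download the artifacts and upload them yourself:

# Download the built wheels, then upload them to PyPI with twine
python run.py download cymem-v1.31.2
twine upload wheels/*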

🎛 API

command run.py build

Build wheels for a given repo and commit / tag.

python run.py build explosion/cymem v1.32.1
  • repo (positional): The repository to build, in user/repo format.
  • commit (positional): The commit to build.
  • --package-name (option): Optional alternative Python package name, if different from the repo name.
  • --universal (flag): Build the sdist and universal wheels (pure Python with no compiled extensions). If enabled, no platform-specific wheels will be built.
  • --llvm (flag): The build requires LLVM to be installed, which triggers an additional step in the Windows build pipeline.
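
For example, a pure-Python project could be built with the --universal flag like this (a hypothetical invocation; the repository and tag are placeholders):

# Build the sdist and a universal wheel only; no platform-specific builds are triggered
python run.py build your-org/your-pure-python-repo v1.0.0 --universal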

command run.py download

Download existing wheels for a release ID (the name of the tag in the build repo). The downloaded wheels will be placed in a wheels directory.

python run.py download cymem-v1.31.2
  • release-id (positional): Name of the release to download.

Environment variables

  • WHEELWRIGHT_ROOT: Root directory of the build repo. Default: the directory containing run.py.
  • WHEELWRIGHT_WHEELS_DIR: Directory for downloaded wheels. Default: wheels/ in the root directory.
  • WHEELWRIGHT_REPO: Build repository in user/repo format. Default: read automatically from the git config.
  • GITHUB_SECRET_TOKEN: Personal GitHub access token, if not provided via github-secret-token.txt. No default.
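
These can also be set per invocation, for example (the path is a placeholder):

# Download wheels into a custom directory for this run only
WHEELWRIGHT_WHEELS_DIR=/tmp/cymem-wheels python run.py download cymem-v1.31.2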

โ‰๏ธ FAQ

What does this actually do?

The build command uses the GitHub API to create a GitHub release in this repo, called something like cymem-v1.31.2. Don't be confused: this is not a real release! We're just abusing GitHub releases to have a temporary place to collect the wheel files as we build them. Then it creates a new branch of this repo, and in the branch it creates a file called build-spec.json describing which project and commit you want to build.
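
For illustration only (this is not wheelwright's actual code), the GitHub REST calls involved look roughly like the following; OWNER/REPO, the tag name and the commit SHA are placeholders:

# Create the throwaway release that will collect the wheels
curl -H "Authorization: token $GITHUB_SECRET_TOKEN" \
     -d '{"tag_name": "cymem-v1.31.2", "name": "cymem-v1.31.2"}' \
     https://api.github.com/repos/OWNER/REPO/releases

# Create the branch that will carry build-spec.json
curl -H "Authorization: token $GITHUB_SECRET_TOKEN" \
     -d '{"ref": "refs/heads/cymem-v1.31.2", "sha": "COMMIT-SHA"}' \
     https://api.github.com/repos/OWNER/REPO/git/refs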

When Azure Pipelines sees this branch, it springs into action, and starts build jobs running on a variety of architectures and Python versions. These build jobs read the build-spec.json file, and then check out the specified project/revision, build it, test it, and finally attach the resulting wheel to the GitHub release we created earlier.

What if something goes wrong?

If the build fails, you'll see the failures in the Azure Pipelines build logs. All artifacts that have completed will still be available to download from the GitHub release.

If you resubmit a build, then run.py will notice and give it a unique build ID – so if you run run.py build explosion/cymem v1.31.2 twice, the first time it'll use the ID cymem-v1.31.2, and the second time it will be cymem-v1.31.2-2, etc. This doesn't affect the generated wheels in any way; it's just to make sure we don't get mixed up between the two builds.

As a package maintainer, what do I need to know about the build process?

Essentially we run:

# Setup
git clone https://github.com/USER-NAME/PROJECT-NAME.git checkout
cd checkout
git checkout REVISION

# Build
cd checkout
pip install -Ur requirements.txt
python setup.py bdist_wheel

# Test
cd empty-directory
pip install -Ur ../checkout/requirements.txt
pip install THE-BUILT-WHEEL
pytest --pyargs PROJECT-NAME

Some things to note:

The build/test phases currently have varying levels of isolation from each other:

  • On Windows and macOS, they use the same Python environment.
  • On Linux, they run in different Docker containers, which are running different Linux distros, to make sure the binaries really are portable.

We use the same requirements.txt for both building and testing. You could imagine splitting those into two separate files, in order to make sure that dependency resolution is working, that we don't have any run-time dependency on Cython, etc., but currently we don't.

We assume that projects use pytest for testing, and that they ship their tests inside their main package, so that you can run the tests directly from an installed wheel without access to a source checkout.

For simplicity, we assume that the repository name (in the clone URL) is the same as the Python import name (in the pytest command). You can override this on a case-by-case basis by passing --package-name ... to the build command (as in the example below), but of course doing this every time is going to be annoying.
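
For instance, explosion/cython-blis installs as the blis package, so building it would need the override (a hypothetical invocation; the tag is a placeholder):

# Repository name and Python import name differ, so pass the import name explicitly
python run.py build explosion/cython-blis v0.7.9 --package-name blis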

Aside from modifying setup.py, there isn't currently any way for a specific project to further customize the build, e.g. if they need to build some dependency like libblis that's not available on PyPI.

What do I need to know to maintain this repo itself?

Internally, this builds on Matthew Brett's multibuild project. A snapshot of multibuild is included as a git submodule, in the multibuild/ directory. You might want to update that submodule occasionally to pull in new multibuild fixes:

cd multibuild
git pull
cd ..
git commit -am "Updated multibuild snapshot"

Multibuild was originally designed to do Linux and macOS builds, and with the idea that you'd create a separate repo for each project with custom configuration. We kluge it into working for us by reading configuration out of the build-spec.json file and using it to configure various settings. Most of the actual configuration is in the azure-pipelines.yml file.

I'm not Explosion, but I want to use this too!

It's all under the MIT license, so feel free! It would be great to somehow convert this into a generic reusable piece of infrastructure, though it's not entirely clear how, given how Rube-Goldbergian the whole thing is – you can't just slap it up on PyPI. (Maybe a cookiecutter template that generates a repo like this?)

More Repositories

1. spaCy: 💫 Industrial-strength Natural Language Processing (NLP) in Python (Python, 29,546 stars)
2. thinc: 🔮 A refreshing functional take on deep learning, compatible with your favorite libraries (Python, 2,813 stars)
3. spacy-course: 👩‍🏫 Advanced NLP with spaCy: A free online course (Python, 2,299 stars)
4. sense2vec: 🦆 Contextually-keyed word vectors (Python, 1,615 stars)
5. spacy-models: 💫 Models for the spaCy Natural Language Processing (NLP) library (Python, 1,589 stars)
6. spacy-transformers: 🛸 Use pretrained transformers like BERT, XLNet and GPT-2 in spaCy (Python, 1,334 stars)
7. projects: 🪐 End-to-end NLP workflows from prototype to production (Python, 1,285 stars)
8. spacy-llm: 🦙 Integrating LLMs into structured NLP pipelines (Python, 1,049 stars)
9. curated-transformers: 🤖 A PyTorch library of curated Transformer models and their composable components (Python, 858 stars)
10. spacy-streamlit: 👑 spaCy building blocks and visualizers for Streamlit apps (Python, 787 stars)
11. spacy-stanza: 💥 Use the latest Stanza (StanfordNLP) research models directly in spaCy (Python, 722 stars)
12. prodigy-recipes: 🍳 Recipes for Prodigy, our fully scriptable annotation tool (Jupyter Notebook, 477 stars)
13. wasabi: 🍣 A lightweight console printing and formatting toolkit (Python, 444 stars)
14. cymem: 💥 Cython memory pool for RAII-style memory management (Cython, 436 stars)
15. srsly: 🦉 Modern high-performance serialization utilities for Python (JSON, MessagePack, Pickle) (Python, 422 stars)
16. displacy: 💥 displaCy.js: An open-source NLP visualiser for the modern web (JavaScript, 343 stars)
17. lightnet: 🌓 Bringing pjreddie's DarkNet out of the shadows #yolo (C, 319 stars)
18. prodigy-openai-recipes: ✨ Bootstrap annotation with zero- & few-shot learning via OpenAI GPT-3 (Python, 318 stars)
19. spacy-notebooks: 💫 Jupyter notebooks for spaCy examples and tutorials (Jupyter Notebook, 285 stars)
20. spacy-services: 💫 REST microservices for various spaCy-related tasks (Python, 240 stars)
21. cython-blis: 💥 Fast matrix-multiplication as a self-contained Python library – no system dependencies! (C, 215 stars)
22. displacy-ent: 💥 displaCy-ent.js: An open-source named entity visualiser for the modern web (CSS, 197 stars)
23. jupyterlab-prodigy: 🧬 A JupyterLab extension for annotating data with Prodigy (TypeScript, 188 stars)
24. spacymoji: 💙 Emoji handling and meta data for spaCy with custom extension attributes (Python, 180 stars)
25. tokenizations: Robust and fast tokenizations alignment library for Rust and Python https://tamuhey.github.io/tokenizations/ (Rust, 180 stars)
26. catalogue: Super lightweight function registries for your library (Python, 171 stars)
27. confection: 🍬 Confection: the sweetest config system for Python (Python, 169 stars)
28. spacy-dev-resources: 💫 Scripts, tools and resources for developing spaCy (Python, 125 stars)
29. radicli: 🕊️ Radically lightweight command-line interfaces (Python, 100 stars)
30. spacy-lookups-data: 📂 Additional lookup tables and data resources for spaCy (Python, 98 stars)
31. spacy-experimental: 🧪 Cutting-edge experimental spaCy components and features (Python, 94 stars)
32. talks: 💥 Browser-based slides or PDFs of our talks and presentations (JavaScript, 94 stars)
33. thinc-apple-ops: 🍎 Make Thinc faster on macOS by calling into Apple's native Accelerate library (Cython, 90 stars)
34. healthsea: Healthsea is a spaCy pipeline for analyzing user reviews of supplementary products for their effects on health. (Python, 87 stars)
35. preshed: 💥 Cython hash tables that assume keys are pre-hashed (Cython, 82 stars)
36. weasel: 🦦 weasel: A small and easy workflow system (Python, 62 stars)
37. spacy-huggingface-pipelines: 💥 Use Hugging Face text and token classification pipelines directly in spaCy (Python, 61 stars)
38. spacy-ray: ☄️ Parallel and distributed training with spaCy and Ray (Python, 54 stars)
39. ml-datasets: 🌊 Machine learning dataset loaders for testing and example scripts (Python, 45 stars)
40. murmurhash: 💥 Cython bindings for MurmurHash2 (C++, 44 stars)
41. assets: 💥 Explosion Assets (43 stars)
42. spacy-huggingface-hub: 🤗 Push your spaCy pipelines to the Hugging Face Hub (Python, 42 stars)
43. wikid: Generate a SQLite database from Wikipedia & Wikidata dumps. (Python, 30 stars)
44. vscode-prodigy: 🧬 A VS Code extension for annotating data with Prodigy (TypeScript, 30 stars)
45. spacy-alignments: 💫 A spaCy package for Yohei Tamura's Rust tokenizations library (Python, 26 stars)
46. spacy-vscode: spaCy extension for Visual Studio Code (Python, 24 stars)
47. spacy-curated-transformers: spaCy entry points for Curated Transformers (Python, 22 stars)
48. spacy-benchmarks: 💫 Runtime performance comparison of spaCy against other NLP libraries (Python, 20 stars)
49. prodigy-hf: Train huggingface models on top of Prodigy annotations (Python, 19 stars)
50. prodigy-pdf: A Prodigy plugin for PDF annotation (Python, 18 stars)
51. spacy-vectors-builder: 🌸 Train floret vectors (Python, 17 stars)
52. os-signpost: Wrapper for the macOS signpost API (Cython, 12 stars)
53. spacy-loggers: 📟 Logging utilities for spaCy (Python, 12 stars)
54. prodigy-evaluate: 🔎 A Prodigy plugin for evaluating spaCy pipelines (Python, 12 stars)
55. prodigy-segment: Select pixels in Prodigy via Facebook's Segment-Anything model. (Python, 11 stars)
56. curated-tokenizers: Lightweight piece tokenization library (Cython, 11 stars)
57. conll-2012: A slightly cleaned up version of the scripts & data for the CoNLL 2012 Coreference task. (Python, 10 stars)
58. thinc_gpu_ops: 🔮 GPU kernels for Thinc (C++, 9 stars)
59. prodigy-ann: A Prodigy plugin for ANN techniques (Python, 4 stars)
60. prodigy-whisper: Audio transcription with OpenAI's whisper model in the loop. (Python, 4 stars)
61. princetondh: Code for our presentation in Princeton DH 2023 April. (Jupyter Notebook, 4 stars)
62. spacy-legacy: 🕸️ Legacy architectures and other registered spaCy v3.x functions for backwards-compatibility (Python, 4 stars)
63. ec2buildwheel (Python, 2 stars)
64. aiGrunn-2023: Materials for the aiGrunn 2023 talk on spaCy Transformer pipelines (Python, 1 star)
65. spacy-io-binder: 📒 Repository used to build Binder images for the interactive spaCy code examples (Jupyter Notebook, 1 star)
66. prodigy-lunr: A Prodigy plugin for document search via LUNR (Python, 1 star)
67. .github: :octocat: GitHub settings (1 star)
68. span-labeling-datasets: Loaders for various span labeling datasets (Python, 1 star)
69. spacy-biaffine-parser (Python, 1 star)