
👩‍🏫 Advanced NLP with spaCy: A free online course

This repo contains both an online course and the modern open-source web framework it runs on. In the course, you'll learn how to use spaCy to build advanced natural language understanding systems, using both rule-based and machine learning approaches. The front-end is powered by Gatsby, Reveal.js and Plyr, and the back-end code execution uses Binder 💖 It's all open-source and published under the MIT license (code and framework) and CC BY-NC (spaCy course materials).

This course is mostly intended for self-study. Yes, you can cheat โ€“ the solutions are all in this repo, there's no penalty for clicking "Show hints" or "Show solution", and you can mark an exercise as done when you think it's done.


💬 Languages and Translations

| Language | Text Examples¹ | Source | Authors |
| --- | --- | --- | --- |
| English | English | chapters/en, exercises/en | @ines |
| German | German | chapters/de, exercises/de | @ines, @Jette16 |
| Spanish | Spanish | chapters/es, exercises/es | @mariacamilagl, @damian-romero |
| French | French | chapters/fr, exercises/fr | @datakime |
| Japanese | Japanese | chapters/ja, exercises/ja | @tamuhey, @hiroshi-matsuda-rit, @icoxfog417, @akirakubo, @forest1988, @ao9mame, @matsurih, @HiromuHota, @mei28, @polm |
| Chinese | Chinese | chapters/zh, exercises/zh | @crownpku |
| Portuguese | English | chapters/pt, exercises/pt | @Cristianasp |

If you spot a mistake, I always appreciate pull requests!

¹ This is the language used for the text examples and resources used in the exercises. For example, the German version of the course also uses German text examples and models. It's not always possible to translate all code examples, so some translations may still use and analyze English text as part of the course.

Related resources

๐Ÿ’ FAQ

Is this related to the spaCy course on DataCamp?

I originally developed the content for DataCamp, but I wanted to build a free version so it's available to more people and doesn't require signing up for their service. As a weekend project, I ended up putting together my own little app to present the exercises and content in a fun and interactive way.

Can I use this to build my own course?

Probably, yes! If you've been looking for a DIY way to publish your materials, I hope that my little framework can be useful. Because so many people expressed interest in this, I put together some starter repos that you can fork and adapt:

Why the different licenses?

The source of the app, UI components and Gatsby framework for building interactive courses is licensed as MIT, like pretty much all of my open-source software. The course materials themselves (slides and chapters) are licensed under CC BY-NC. This means that you can use them freely – you just can't make money off them.

I want to help translate this course into my language. How can I contribute?

First, thanks so much, this is really cool and valuable to the community 🙌 I've tried to set up the course structure so it's easy to add different languages: language-specific files are organized into directories in exercises and chapters, and other language-specific texts are available in locale.json. If you want to contribute, there are two different ways to get involved:

  1. Start a community translation project. This is the easiest, no-strings-attached way. You can fork the repo, copy-paste the English version, change the language code, start translating and invite others to contribute (if you like). If you're looking for contributors, feel free to open an issue here or tag @spacy_io on Twitter so we can help get the word out. We're also happy to answer your questions on the issue tracker.

  2. Make us an offer. We're open to commissioning translations for different languages, so if you're interested, email us at [email protected] and include your offer, estimated time schedule and a bit about you and your background (and any technical writing or translation work you've done in the past, if available). It doesn't matter where you're based, but you should be able to issue invoices as a freelancer or similar, depending on your country.

I want to help create an audio/video tutorial for an existing translation. How can I get involved?

Again, thanks, this is super cool! While the English and German versions also include video recordings, that's not a requirement, and we'd be happy to just provide an audio track alongside the slides. We'd take care of the post-processing and video editing, so all we need is the audio recording. If you feel comfortable recording yourself reading out the slide notes in your language, email us at [email protected] with your offer and a bit about you and similar work you've done in the past, if available.

🎛 Usage & API

Running the app

To start the local development server, install the Gatsby CLI and the project's dependencies, then use npm run dev. Make sure you have at least Node 10.15 installed.

```bash
npm install -g gatsby-cli  # Install Gatsby globally
npm install                # Install dependencies
npm run dev                # Run the development server
```

If running with Docker, just run make build and then make gatsby-dev.

How it works

When building the site, Gatsby will look for .py files and make their contents available to query via GraphQL. This lets us use the raw code within the app. Under the hood, the app uses Binder to serve up an image with the package dependencies, including the spaCy models. By calling into JupyterLab, we can then execute code using the active kernel. This lets you edit the code in the browser and see the live results. Also see my juniper repo for more details on the implementation.

To validate the code when the user hits "Submit", I'm currently using a slightly hacky trick. Since the Python code is sent back to the kernel as a string, we can manipulate it and add tests โ€“ for example, exercise exc_01_02_01.py will be validated using test_01_02_01.py (if available). The user code and test are combined using a string template. At the moment, the testTemplate in the meta.json looks like this:

```python
from wasabi import msg
__msg__ = msg
__solution__ = """${solution}"""
${solution}

${test}
try:
    test()
except AssertionError as e:
    __msg__.fail(e)
```

If present, ${solution} will be replaced with the string value of the submitted user code. In this case, we're inserting it twice: once as a string so we can check whether the submission includes something, and once as the code, so we can actually run it and check the objects it creates. ${test} is replaced by the contents of the test file. I'm also making wasabi's printer available as __msg__, so we can easily print pretty messages in the tests. Finally, the try/except block checks if the test function raises an AssertionError and, if so, displays the error message. This also hides the full error traceback (which could easily leak the correct answers).
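The substitution step itself can be sketched with Python's built-in string.Template, which uses the same ${...} placeholder syntax. This is a simplified stand-in for the app's real template handling: the file contents below are made up, and wasabi is replaced with a plain print for self-containedness.

```python
from string import Template

# Hypothetical stand-ins for a submitted exercise and its test file
user_code = "result = 2 + 2"
test_code = (
    "def test():\n"
    '    assert "2 + 2" in __solution__, "Use addition!"\n'
)

# Simplified version of the testTemplate described above (wasabi omitted)
TEST_TEMPLATE = Template(
    '__solution__ = """${solution}"""\n'
    "${solution}\n"
    "\n"
    "${test}\n"
    "try:\n"
    "    test()\n"
    "except AssertionError as e:\n"
    '    print("FAIL:", e)\n'
)

# ${solution} is substituted twice: once as a string, once as runnable code
combined = TEST_TEMPLATE.substitute(solution=user_code, test=test_code)
exec(combined)  # defines __solution__, runs the user code, then the test
```

Because the user code is inserted both as a string and as executable code, the test can assert on the submission's text (via __solution__) as well as on the objects it creates.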

A test file could then look like this:

```python
def test():
    assert "spacy.load" in __solution__, "Are you calling spacy.load?"
    assert nlp.meta["lang"] == "en", "Are you loading the correct model?"
    assert nlp.meta["name"] == "core_web_sm", "Are you loading the correct model?"
    assert "nlp(text)" in __solution__, "Are you processing the text correctly?"
    assert "print(doc.text)" in __solution__, "Are you printing the Doc's text?"

    __msg__.good(
        "Well done! Now that you've practiced loading models, let's look at "
        "some of their predictions."
    )
```

With this approach, it's not always possible to validate the input perfectly โ€“ there are too many options and we want to avoid false positives.

Running automated tests

The automated tests make sure that the provided solution code is compatible with the test file that's used to validate submissions. The test suite is powered by the pytest framework and runnable test files are generated automatically in a directory __tests__ before the test session starts. See the conftest.py for implementation details.

```bash
# Install requirements
pip install -r binder/requirements.txt
# Run the tests (will generate the files automatically)
python -m pytest __tests__
```

If running with Docker, just run make build and then make pytest.

Directory Structure

```
├── binder
|   └── requirements.txt  # Python dependency requirements for Binder
├── chapters              # chapters, grouped by language
|   ├── en                # English chapters, one Markdown file per chapter
|   |   └── slides        # English slides, one Markdown file per presentation
|   └── ...               # other languages
├── exercises             # code files, tests and assets for exercises
|   ├── en                # English exercises, solutions, tests and data
|   └── ...               # other languages
├── public                # compiled site
├── src                   # Gatsby/React source, independent from content
├── static                # static assets like images, available in slides/chapters
├── locale.json           # translations of meta and UI text
├── meta.json             # course metadata
└── theme.sass            # UI theme colors and settings
```

Setting up Binder

The requirements.txt in the repository defines the packages that are installed when building it with Binder. For this course, I'm using the source repo as the Binder repo, as it lets me keep everything in one place. It also lets the exercises reference and load other files (e.g. JSON), which are copied over into the Python environment. I build the Binder image from a branch binder, though, which I only update if Binder-relevant files change. Otherwise, every update to master would trigger an image rebuild.

You can specify the binder settings like repo, branch and kernel type in the "juniper" section of the meta.json. I'd recommend running the very first build via the interface on the Binder website, as this gives you a detailed build log and feedback on whether everything worked as expected. Enter your repository URL, click "launch" and wait for it to install the dependencies and build the image.
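For example, a "juniper" section in meta.json could look like the following. The field values here are illustrative; check the repo's actual meta.json for the exact schema:

```json
{
  "juniper": {
    "repo": "ines/spacy-course",
    "branch": "binder",
    "kernelType": "python3"
  }
}
```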


File formats

Chapters

Chapters are placed in /chapters and are Markdown files consisting of <exercise> components. They'll be turned into pages, e.g. /chapter1. In their frontmatter block at the top of the file, they need to specify type: chapter, as well as the following meta:

```yaml
---
title: The chapter title
description: The chapter description
prev: /chapter1 # exact path to previous chapter or null to not show a link
next: /chapter3 # exact path to next chapter or null to not show a link
id: 2 # unique identifier for chapter
type: chapter # important: this creates a standalone page from the chapter
---
```

Slides

Slides are placed in /slides and are Markdown files consisting of slide content, separated by ---. They need to specify the following frontmatter block at the top of the file:

```yaml
---
type: slides
---
```

The first and last slide use a special layout and will display the headline in the center of the slide. Speaker notes (in this case, the script) can be added at the end of a slide, prefixed by Notes:. They'll then be shown on the right next to the slides. Here's an example slides file:

```markdown
---
type: slides
---

# Processing pipelines

Notes: This is a slide deck about processing pipelines.

---

# Next slide

- Some bullet points here
- And another bullet point

<img src="/image.jpg" alt="An image located in /static" />
```

Custom Elements

When using custom elements, make sure to place a newline between the opening/closing tags and the children. Otherwise, Markdown content may not render correctly.

<exercise>

Container of a single exercise.

| Argument | Type | Description |
| --- | --- | --- |
| id | number / string | Unique exercise ID within chapter. |
| title | string | Exercise title. |
| type | string | Optional type. "slides" makes container wider and adds icon. |
| children | - | The contents of the exercise. |

```markdown
<exercise id="1" title="Introduction to spaCy">

Content goes here...

</exercise>
```

<codeblock>

| Argument | Type | Description |
| --- | --- | --- |
| id | number / string | Unique identifier of the code exercise. |
| source | string | Name of the source file (without file extension). Defaults to exc_${id} if not set. |
| solution | string | Name of the solution file (without file extension). Defaults to solution_${id} if not set. |
| test | string | Name of the test file (without file extension). Defaults to test_${id} if not set. |
| children | string | Optional hints displayed when the user clicks "Show hints". |

```markdown
<codeblock id="02_03">

This is a hint!

</codeblock>
```

<slides>

Container to display slides interactively using Reveal.js and a Markdown file.

| Argument | Type | Description |
| --- | --- | --- |
| source | string | Name of slides file (without file extension). |

```markdown
<slides source="chapter1_01_introduction-to-spacy">
</slides>
```

<choice>

Container for multiple-choice question.

| Argument | Type | Description |
| --- | --- | --- |
| id | string / number | Optional unique ID. Can be used if more than one choice question is present in one exercise. |
| children | nodes | Only <opt> components for the options. |

```markdown
<choice>

<opt text="Option one">You have selected option one! This is not good.</opt>
<opt text="Option two" correct="true">Yay!</opt>

</choice>
```

<opt>

A multiple-choice option.

| Argument | Type | Description |
| --- | --- | --- |
| text | string | The option text to be displayed. Supports inline HTML. |
| correct | string | "true" if the option is the correct answer. |
| children | string | The text to be displayed if the option is selected (explaining why it's correct or incorrect). |
