  • Stars: 169
  • Rank: 224,453 (Top 5%)
  • Language: Python
  • License: MIT License
  • Created: over 2 years ago
  • Updated: 6 months ago

Repository Details

๐Ÿฌ Confection: the sweetest config system for Python

Confection: The sweetest config system for Python

confection ๐Ÿฌ is a lightweight library that offers a configuration system letting you conveniently describe arbitrary trees of objects.

Configuration is a huge challenge for machine-learning code because you may want to expose almost any detail of any function as a hyperparameter. The setting you want to expose might be arbitrarily far down in your call stack, so it might need to pass all the way through the CLI or REST API, through any number of intermediate functions, affecting the interface of everything along the way. And then once those settings are added, they become hard to remove later. Default values also become hard to change without breaking backwards compatibility.

To solve this problem, confection offers a config system that lets you easily describe arbitrary trees of objects. The objects can be created via function calls you register using a simple decorator syntax. You can even version the functions you create, allowing you to make improvements without breaking backwards compatibility. The most similar config system we're aware of is Gin, which uses a similar syntax and also allows you to link the configuration system to functions in your code using a decorator. confection's config system is simpler and emphasizes a different workflow via a subset of Gin's functionality.

โณ Installation

pip install confection
conda install -c conda-forge confection

👩‍💻 Usage

The configuration system parses a .cfg file like

[training]
patience = 10
dropout = 0.2
use_vectors = false

[training.logging]
level = "INFO"

[nlp]
# This uses the value of training.use_vectors
use_vectors = ${training.use_vectors}
lang = "en"

and resolves it to a Dict:

{
  "training": {
    "patience": 10,
    "dropout": 0.2,
    "use_vectors": false,
    "logging": {
      "level": "INFO"
    }
  },
  "nlp": {
    "use_vectors": false,
    "lang": "en"
  }
}

The config is divided into sections, with the section name in square brackets, for example [training]. Within the sections, config values can be assigned to keys using =. Values can also be referenced from other sections using dot notation and placeholders indicated by the dollar sign and curly braces. For example, ${training.use_vectors} will receive the value of use_vectors in the training block. This is useful for settings that are shared across components.

The config format has three main differences from Python's built-in configparser:

  1. JSON-formatted values. confection passes all values through json.loads to interpret them. You can use atomic values like strings, floats, integers or booleans, or you can use complex objects such as lists or maps (see the snippet after this list).
  2. Structured sections. confection uses a dot notation to build nested sections. If you have a section named [section.subsection], confection will parse that into a nested structure, placing subsection within section.
  3. References to registry functions. If a key starts with @, confection will interpret its value as the name of a function registry, load the function registered for that name and pass in the rest of the block as arguments. If type hints are available on the function, the argument values (and return value of the function) will be validated against them. This lets you express complex configurations, like a training pipeline where batch_size is populated by a function that yields floats.
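
For instance, here is a short illustrative snippet (not taken from the repository) that combines JSON-formatted values with a nested section:

[pipeline]
steps = ["tokenize", "embed", "classify"]
settings = {"seed": 42, "verbose": true}

[pipeline.evaluation]
metrics = ["accuracy", "f1"]

After parsing, steps and settings become a Python list and dict, and the evaluation block is nested inside pipeline in the resulting dictionary.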

There's no pre-defined scheme you have to follow; how you set up the top-level sections is up to you. At the end of it, you'll receive a dictionary with the values that you can use in your script, whether it's complete initialized functions or just basic settings.

For instance, let's say you want to define a new optimizer. You'd define its arguments in config.cfg like so:

[optimizer]
@optimizers = "my_cool_optimizer.v1"
learn_rate = 0.001
gamma = 1e-8

To load and parse this configuration using a catalogue registry (install catalogue separately):

import dataclasses
from typing import Union, Iterable
import catalogue
from confection import registry, Config

# Create a new registry.
registry.optimizers = catalogue.create("confection", "optimizers", entry_points=False)


# Define a dummy optimizer class.
@dataclasses.dataclass
class MyCoolOptimizer:
    learn_rate: float
    gamma: float


@registry.optimizers.register("my_cool_optimizer.v1")
def make_my_optimizer(learn_rate: Union[float, Iterable[float]], gamma: float):
    return MyCoolOptimizer(learn_rate, gamma)


# Load the config file from disk, resolve it and fetch the instantiated optimizer object.
config = Config().from_disk("./config.cfg")
resolved = registry.resolve(config)
optimizer = resolved["optimizer"]  # MyCoolOptimizer(learn_rate=0.001, gamma=1e-08)

Under the hood, confection will look up the "my_cool_optimizer.v1" function in the "optimizers" registry and then call it with the arguments learn_rate and gamma. If the function has type annotations, it will also validate the input. For instance, if learn_rate is annotated as a float and the config defines a string, confection will raise an error.
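
As a quick sketch building on the optimizer example above (the invalid value for gamma is illustrative), a type mismatch surfaces when the config is resolved; confection exposes a ConfigValidationError that you can catch:

from confection import Config, ConfigValidationError

# Uses the same registry and registered "my_cool_optimizer.v1" as above.
bad_config_str = """
[optimizer]
@optimizers = "my_cool_optimizer.v1"
learn_rate = 0.001
gamma = "high"
"""

try:
    registry.resolve(Config().from_str(bad_config_str))
except ConfigValidationError as err:
    print(err)  # reports that gamma is not a valid float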

The Thinc documentation offers further information on the configuration system.

🎛 API

class Config

This class holds the model and training configuration and can load and save the INI-style configuration format from/to a string, file or bytes. The Config class is a subclass of dict and uses Python's ConfigParser under the hood.

method Config.__init__

Initialize a new Config object with optional data.

from confection import Config
config = Config({"training": {"patience": 10, "dropout": 0.2}})

Argument | Type | Description
data | Optional[Union[Dict[str, Any], Config]] | Optional data to initialize the config with.
section_order | Optional[List[str]] | Top-level section names, in order, used to sort the saved and loaded config. All other sections will be sorted alphabetically.
is_interpolated | Optional[bool] | Whether the config is interpolated or whether it contains variables. Read from the data if it's an instance of Config and otherwise defaults to True.
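
A minimal sketch of section_order (the section names here are illustrative): sections listed in section_order are written first when the config is serialized, and any remaining sections are sorted alphabetically.

from confection import Config

config = Config(
    {"nlp": {"lang": "en"}, "training": {"dropout": 0.2}},
    section_order=["training", "nlp"],
)
print(config.to_str())  # the [training] section is emitted before [nlp]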

method Config.from_str

Load the config from a string.

from confection import Config

config_str = """
[training]
patience = 10
dropout = 0.2
"""
config = Config().from_str(config_str)
print(config["training"])  # {'patience': 10, 'dropout': 0.2}

Argument | Type | Description
text | str | The string config to load.
interpolate | bool | Whether to interpolate variables like ${section.key}. Defaults to True.
overrides | Dict[str, Any] | Overrides for values and sections. Keys are provided in dot notation, e.g. "training.dropout" mapped to the value.
RETURNS | Config | The loaded config.
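
A minimal sketch of the overrides argument (the override value is illustrative): dotted keys replace the corresponding values before the config is returned.

from confection import Config

config_str = """
[training]
patience = 10
dropout = 0.2
"""
config = Config().from_str(config_str, overrides={"training.dropout": 0.5})
print(config["training"]["dropout"])  # 0.5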

method Config.to_str

Write the config to a string.

from confection import Config

config = Config({"training": {"patience": 10, "dropout": 0.2}})
print(config.to_str()) # '[training]\npatience = 10\n\ndropout = 0.2'

Argument | Type | Description
interpolate | bool | Whether to interpolate variables like ${section.key}. Defaults to True.
RETURNS | str | The string config.

method Config.to_bytes

Serialize the config to a byte string.

from confection import Config

config = Config({"training": {"patience": 10, "dropout": 0.2}})
config_bytes = config.to_bytes()
print(config_bytes)  # b'[training]\npatience = 10\n\ndropout = 0.2'

Argument | Type | Description
interpolate | bool | Whether to interpolate variables like ${section.key}. Defaults to True.
overrides | Dict[str, Any] | Overrides for values and sections. Keys are provided in dot notation, e.g. "training.dropout" mapped to the value.
RETURNS | bytes | The serialized config.

method Config.from_bytes

Load the config from a byte string.

from confection import Config

config = Config({"training": {"patience": 10, "dropout": 0.2}})
config_bytes = config.to_bytes()
new_config = Config().from_bytes(config_bytes)

Argument | Type | Description
bytes_data | bytes | The data to load.
interpolate | bool | Whether to interpolate variables like ${section.key}. Defaults to True.
RETURNS | Config | The loaded config.

method Config.to_disk

Serialize the config to a file.

from confection import Config

config = Config({"training": {"patience": 10, "dropout": 0.2}})
config.to_disk("./config.cfg")

Argument | Type | Description
path | Union[Path, str] | The file path.
interpolate | bool | Whether to interpolate variables like ${section.key}. Defaults to True.

method Config.from_disk

Load the config from a file.

from confection import Config

config = Config({"training": {"patience": 10, "dropout": 0.2}})
config.to_disk("./config.cfg")
new_config = Config().from_disk("./config.cfg")

Argument | Type | Description
path | Union[Path, str] | The file path.
interpolate | bool | Whether to interpolate variables like ${section.key}. Defaults to True.
overrides | Dict[str, Any] | Overrides for values and sections. Keys are provided in dot notation, e.g. "training.dropout" mapped to the value.
RETURNS | Config | The loaded config.

method Config.copy

Deep-copy the config.

Argument | Type | Description
RETURNS | Config | The copied config.
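
For illustration, a minimal sketch showing that the copy is independent of the original:

from confection import Config

config = Config({"training": {"patience": 10, "dropout": 0.2}})
config_copy = config.copy()
config_copy["training"]["dropout"] = 0.5
print(config["training"]["dropout"])  # 0.2, the original is unchanged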

method Config.interpolate

Interpolate variables like ${section.value} or ${section.subsection} and return a copy of the config with interpolated values. Can be used if a config is loaded with interpolate=False, e.g. via Config.from_str.

from confection import Config

config_str = """
[hyper_params]
dropout = 0.2

[training]
dropout = ${hyper_params.dropout}
"""
config = Config().from_str(config_str, interpolate=False)
print(config["training"])  # {'dropout': '${hyper_params.dropout}'}
config = config.interpolate()
print(config["training"])  # {'dropout': 0.2}

Argument | Type | Description
RETURNS | Config | A copy of the config with interpolated values.

method Config.merge

Deep-merge two config objects, using the current config as the default. Only merges sections and dictionaries and not other values like lists. Values that are provided in the updates are overwritten in the base config, and any new values or sections are added. If a config value is a variable like ${section.key} (e.g. if the config was loaded with interpolate=False), the variable is preferred, even if the updates provide a different value. This ensures that variable references aren't destroyed by a merge.

⚠️ Note that blocks that refer to registered functions using the @ syntax are only merged if they are referring to the same functions. Otherwise, merging could easily produce invalid configs, since different functions can take different arguments. If a block refers to a different function, it's overwritten.

from confection import Config

base_config_str = """
[training]
patience = 10
dropout = 0.2
"""
update_config_str = """
[training]
dropout = 0.1
max_epochs = 2000
"""

base_config = Config().from_str(base_config_str)
update_config = Config().from_str(update_config_str)
merged = Config(base_config).merge(update_config)
print(merged["training"])  # {'patience': 10, 'dropout': 0.1, 'max_epochs': 2000}

Argument | Type | Description
overrides | Union[Dict[str, Any], Config] | The updates to merge into the config.
RETURNS | Config | A new config instance containing the merged config.
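
A short sketch of the variable-preservation behaviour described above (assuming the base config is loaded with interpolate=False): the ${hyper_params.dropout} reference survives the merge even though the update supplies a concrete value.

from confection import Config

base_config_str = """
[hyper_params]
dropout = 0.2

[training]
dropout = ${hyper_params.dropout}
"""
update_config_str = """
[training]
dropout = 0.5
"""
base_config = Config().from_str(base_config_str, interpolate=False)
update_config = Config().from_str(update_config_str)
merged = Config(base_config).merge(update_config)
print(merged["training"]["dropout"])  # '${hyper_params.dropout}', the variable is preferred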

Config Attributes

Attribute | Type | Description
is_interpolated | bool | Whether the config values have been interpolated. Defaults to True and is set to False if a config is loaded with interpolate=False, e.g. using Config.from_str.
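
For example, a minimal sketch:

from confection import Config

config = Config().from_str("[training]\ndropout = 0.2", interpolate=False)
print(config.is_interpolated)  # False
config = config.interpolate()
print(config.is_interpolated)  # True, the returned copy is interpolated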
