Structured State Spaces for Sequence Modeling

This repository provides the official implementations and experiments for models related to S4, including HiPPO, LSSL, SaShiMi, DSS, HTTYH, S4D, and S4ND.

Project-specific information for each of these models, including an overview of the source code and specific experiment reproductions, can be found under models/.

Table of Contents

Setting up the environment and porting S4 to external codebases:

  • Setup
  • Getting Started with S4

Using this repository for training models:

  • Training
  • Generation
  • Overall Repository Structure

Changelog

See CHANGELOG.md

Roadmap

  • More documentation for training from scratch using this repository
  • Compilation of S4 resources and implementations
  • pip package

Setup

Requirements

This repository requires Python 3.9+ and PyTorch 1.10+. It has been tested up to PyTorch 1.13.1. Other packages are listed in requirements.txt. Some care may be needed to keep the library versions compatible, particularly torch/torchvision/torchaudio/torchtext.

Example installation:

conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
pip install -r requirements.txt

Structured Kernels

The core operations of S4 are the Cauchy and Vandermonde kernels described in the paper. These are very simple matrix multiplications; naive implementations of these operations can be found in the standalone S4 file in the functions cauchy_naive and log_vandermonde_naive. However, as the paper describes, this approach has suboptimal memory usage that currently requires a custom kernel to overcome in PyTorch.
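
For reference, a minimal sketch of what these naive kernels compute is shown below. The tensor shapes, broadcasting conventions, and exact return values (e.g. taking real parts) are assumptions; see the standalone file for the actual cauchy_naive and log_vandermonde_naive implementations.

import torch

def cauchy_naive(v, z, w):
    # Cauchy kernel: k(z_l) = sum_j v_j / (z_l - w_j)
    # v, w: (..., N) complex; z: (..., L) complex
    return (v.unsqueeze(-1) / (z.unsqueeze(-2) - w.unsqueeze(-1))).sum(dim=-2)

def log_vandermonde_naive(v, x, L):
    # Vandermonde contraction in log space: k(l) = sum_j v_j * exp(x_j * l), l = 0..L-1
    # Materializes the full (..., N, L) matrix, which is the memory bottleneck noted above.
    vandermonde = torch.exp(x.unsqueeze(-1) * torch.arange(L, device=x.device))
    return (v.unsqueeze(-1) * vandermonde).sum(dim=-2)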

Two more efficient methods are supported. The code will automatically detect if either of these is installed and call the appropriate kernel.

Custom CUDA Kernel

This version is faster but requires manual compilation for each machine environment. Run python setup.py install from the directory extensions/kernels/.
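
For example, from the repository root:

cd extensions/kernels/
python setup.py install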

Pykeops

This version is provided by the pykeops library. Installation usually works out of the box with pip install pykeops cmake, both of which are also listed in the requirements file.

Getting Started with S4

S4 Module

Self-contained files for the S4 layer and variants can be found in models/s4/, which includes instructions for calling the module.

See notebooks/ for visualizations explaining some concepts behind HiPPO and S4.
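
As a quick orientation, calling the standalone layer typically looks like the sketch below. The import path, constructor arguments, and return signature are assumptions; the actual usage instructions are in models/s4/.

import torch
from models.s4.s4d import S4D  # assumed path to a standalone variant

layer = S4D(d_model=128)              # hypothetical constructor arguments
x = torch.randn(8, 128, 1024)         # (batch, d_model, length)
y, state = layer(x)                   # output has the same shape as the input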

Example Train Script (External Usage)

example.py is a self-contained training script for MNIST and CIFAR that imports the standalone S4 file. With the default settings, python example.py reaches 88% accuracy on sequential CIFAR using a very simple S4D model with 200k parameters. This script can be used as an example of how to use S4 variants in external repositories.
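
For illustration, the overall pattern of example.py can be sketched as a small wrapper model; the import path, constructor arguments, and pooling choice below are assumptions, so defer to example.py for the working version.

import torch
import torch.nn as nn
from models.s4.s4d import S4D  # assumed path to the standalone layer

class TinySequenceClassifier(nn.Module):
    def __init__(self, d_input=3, d_model=128, n_classes=10):
        super().__init__()
        self.encoder = nn.Linear(d_input, d_model)
        self.s4 = S4D(d_model)                    # hypothetical constructor
        self.decoder = nn.Linear(d_model, n_classes)

    def forward(self, x):                         # x: (batch, length, d_input)
        x = self.encoder(x)                       # (batch, length, d_model)
        x = x.transpose(-1, -2)                   # layer expects (batch, d_model, length)
        x, _ = self.s4(x)
        x = x.transpose(-1, -2).mean(dim=1)       # mean-pool over the sequence
        return self.decoder(x)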

Training with this Repository (Internal Usage)

This repository aims to provide a very flexible framework for training sequence models. Many models and datasets are supported.

The basic entrypoint is python -m train, or equivalently

python -m train pipeline=mnist model=s4

which trains an S4 model on the Permuted MNIST dataset. This should reach around 90% accuracy after 1 epoch, which takes 1-3 minutes depending on the GPU.

More examples of using this repository are documented throughout. See Training for an overview.

Optimizer Hyperparameters

One important feature of this codebase is support for parameters that require different optimizer hyperparameters. In particular, the SSM kernel is particularly sensitive to the $(A, B)$ (and sometimes $\Delta$) parameters, so the learning rate on these parameters is sometimes lowered and the weight decay is always set to $0$.

See the method register in the model (e.g. s4d.py) and the function setup_optimizer in the training script (e.g. example.py) for an example of how to implement this in external repos.
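
A minimal sketch of the general pattern is shown below, assuming the sensitive SSM parameters can be identified by name; the repository itself uses a register mechanism plus a setup_optimizer helper, and the parameter names here are illustrative.

import torch

def setup_optimizer(model, lr=1e-3, ssm_lr=1e-4, weight_decay=0.01):
    # Split the sensitive SSM kernel parameters (e.g. A, B, dt) from the rest.
    ssm_params, other_params = [], []
    for name, p in model.named_parameters():
        if any(key in name for key in ("A", "B", "log_dt")):   # illustrative name matching
            ssm_params.append(p)
        else:
            other_params.append(p)
    return torch.optim.AdamW([
        {"params": other_params, "lr": lr, "weight_decay": weight_decay},
        {"params": ssm_params, "lr": ssm_lr, "weight_decay": 0.0},  # lower lr, no weight decay
    ])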

Training

The core training infrastructure of this repository is built on PyTorch Lightning, with a configuration scheme based on Hydra.

The main entrypoint is train.py and configs are found in configs/.

Data

Basic datasets are auto-downloaded, including MNIST, CIFAR, and Speech Commands. All logic for creating and loading datasets is in the src/dataloaders directory. The README inside this subdirectory documents how to download and organize other datasets.

Models

Models are defined in src/models. See the README in this subdirectory for an overview.

Configs and Hyperparameters

Pre-defined configs that reproduce the end-to-end experiments from the papers are provided; they are documented under the project-specific information in models/, for example for the original S4 paper.

Configs can also be easily modified through the command line. An example experiment is

python -m train pipeline=mnist dataset.permute=True model=s4 model.n_layers=3 model.d_model=128 model.norm=batch model.prenorm=True wandb=null

This trains an S4 model on the Permuted MNIST task with a specified number of layers, backbone dimension, and normalization type.

See configs/README.md for more detailed documentation about the configs.

Hydra

It is recommended to read the Hydra documentation to fully understand the configuration framework. For help launching specific experiments, please file an issue.

Resuming

Each experiment is logged to its own directory (generated by Hydra) of the form ./outputs/<date>/<time>/. Checkpoints are saved inside this folder, and the checkpoint path is printed to the console whenever a new checkpoint is created. To resume training, simply point to the desired .ckpt file (a PyTorch Lightning checkpoint, e.g. ./outputs/<date>/<time>/checkpoints/val/loss.ckpt) and append the flag train.ckpt=<path>/<to>/<checkpoint>.ckpt to the original training command.
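
For example, if the original run was the Permuted MNIST command above, resuming might look like the following (the <date>/<time> path is a placeholder for the actual Hydra output directory):

python -m train pipeline=mnist model=s4 train.ckpt=./outputs/<date>/<time>/checkpoints/val/loss.ckpt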

PyTorch Lightning Trainer

The PTL Trainer class controls the overall training loop and also provides many useful pre-defined flags. Some useful examples are explained below. The full list of allowable flags can be found in the PTL documentation, as well as in our trainer configs. See the default trainer config configs/trainer/default.yaml for the most useful options.

Multi-GPU training

Simply pass in trainer.gpus=2 to train with 2 GPUs.

Inspect model layers

trainer.weights_summary=full prints out every layer of the model with their parameter counts. Useful for debugging internals of models.

Data subsampling

trainer.limit_{train,val}_batches={10,0.1} trains (validates) on only 10 batches (0.1 fraction of all batches). Useful for testing the train loop without going through all the data.
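
These flags can be combined with any training command; for instance, a quick debugging run might look like the following (flag values are illustrative):

python -m train pipeline=mnist model=s4 trainer.gpus=2 trainer.weights_summary=full trainer.limit_train_batches=10 trainer.limit_val_batches=10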

WandB

Logging with WandB is built into this repository. To use it, simply set your WANDB_API_KEY environment variable and change the wandb.project attribute of configs/config.yaml (or pass it on the command line, e.g. python -m train .... wandb.project=s4).

Set wandb=null to turn off WandB logging.

Generation

Autoregressive generation can be performed with the generate.py script. This script can be used in two ways after training a model using this codebase.

Option 1: Checkpoint Path

The more flexible option requires the checkpoint path of the trained PyTorch Lightning model. The generation script accepts the same config options as the train script, with a few additional flags that are documented in configs/generate.yaml. After training with python -m train <train flags>, generate with

python -m generate <train flags> checkpoint_path=<path/to/model.ckpt> <generation flags>

Any of the flags found in the config can be overridden.

Note: This option can be used with either .ckpt checkpoints (PyTorch Lightning, which includes information for the Trainer) or .pt checkpoints (PyTorch, which is just a model state dict).

Option 2: Experiment Path

The second option for generation does not require passing in training flags again, and instead reads the config from the Hydra experiment folder, along with a PyTorch Lightning checkpoint within the experiment folder.

Example 1 (Language)

Download the WikiText-103 model checkpoint, for example to ./checkpoints/s4-wt103.pt. This model was trained with the command python -m train experiment=lm/s4-wt103. Note that from the config we can see that the model was trained with a receptive field of length 8192.

To generate, run

python -m generate experiment=lm/s4-wt103 checkpoint_path=checkpoints/s4-wt103.pt n_samples=1 l_sample=16384 l_prefix=8192 decode=text

This generates a sample of length 16384 conditioned on a prefix of length 8192.

Example 2 (Audio)

Let's train a small SaShiMi model on the SC09 dataset. We can also reduce the number of training and validation batches to get a checkpoint faster:

python -m train experiment=audio/sashimi-sc09 model.n_layers=2 trainer.limit_train_batches=0.1 trainer.limit_val_batches=0.1

After the first epoch completes, a message is printed indicating where the checkpoint is saved.

Epoch 0, global step 96: val/loss reached 3.71754 (best 3.71754), saving model to "<repository>/outputs/<date>/<time>/checkpoints/val/loss.ckpt"

Option 1:

python -m generate experiment=audio/sashimi-sc09 model.n_layers=2 checkpoint_path=<repository>/outputs/<date>/<time>/checkpoints/val/loss.ckpt n_samples=4 l_sample=16000

This option redefines the full config so that the model and dataset can be constructed.

Option 2:

python -m generate experiment_path=<repository>/outputs/<date>/<time> checkpoint_path=checkpoints/val/loss.ckpt n_samples=4 l_sample=16000

This option only needs the path to the Hydra experiment folder and the desired checkpoint within.

Overall Repository Structure

configs/         Config files for model, data pipeline, training loop, etc.
data/            Default location of raw data
extensions/      CUDA extensions (Cauchy and Vandermonde kernels)
src/             Main source code for models, datasets, etc.
  callbacks/     Training loop utilities (e.g. checkpointing)
  dataloaders/   Dataset and dataloader definitions
  models/        Model definitions
  tasks/         Encoder/decoder modules to interface between data and model backbone
  utils/
models/          Model-specific information (code, experiments, additional resources)
example.py       Example training script for using S4 externally
train.py         Training entrypoint for this repo
generate.py      Autoregressive generation script

Citation

If you use this codebase, or otherwise find our work valuable, please cite S4 and other relevant papers.

@inproceedings{gu2022efficiently,
  title={Efficiently Modeling Long Sequences with Structured State Spaces},
  author={Gu, Albert and Goel, Karan and R\'e, Christopher},
  booktitle={The International Conference on Learning Representations ({ICLR})},
  year={2022}
}
