Introduction
The brain is the perfect place to look for inspiration to develop more efficient neural networks. One of the main differences between the brain and modern deep learning is that the brain encodes information in spikes rather than continuous activations. snnTorch is a Python package for performing gradient-based learning with spiking neural networks. It extends the capabilities of PyTorch, taking advantage of its GPU-accelerated tensor computation and applying it to networks of spiking neurons. Pre-designed spiking neuron models are seamlessly integrated within the PyTorch framework and can be treated as recurrent activation units.
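For example, a single spiking neuron can be driven much like any other PyTorch module. The snippet below is a minimal sketch using snn.Leaky; the decay rate and input shape are arbitrary choices:

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)  # leaky integrate-and-fire neuron with decay rate beta
mem = lif.init_leaky()     # initialize the membrane potential

x = torch.rand(1, 10)      # one time step of input current (arbitrary values)
spk, mem = lif(x, mem)     # returns output spikes and the updated membrane state
```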
If you like this project, please consider starring the repository on GitHub.
If you have issues, comments, or are looking for advice on training spiking neural networks, you can open an issue, start a discussion, or chat in our Discord channel.
snnTorch Structure
snnTorch contains the following components:
| Component | Description |
|---|---|
| snntorch | a spiking neuron library like torch.nn, deeply integrated with autograd |
| snntorch.functional | common arithmetic operations on spikes, e.g., loss, regularization |
| snntorch.spikegen | a library for spike generation and data conversion |
| snntorch.spikeplot | visualization tools for spike-based data using matplotlib and celluloid |
| snntorch.surrogate | optional surrogate gradient functions |
| snntorch.utils | dataset utility functions |
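As a taste of how these components fit together, the sketch below uses snntorch.spikegen to convert static data into a rate-coded spike train; the input tensor and number of time steps are arbitrary choices:

```python
import torch
from snntorch import spikegen

data = torch.rand(1, 28, 28)                     # static data, values in [0, 1]
spike_train = spikegen.rate(data, num_steps=10)  # Bernoulli rate coding over 10 steps
print(spike_train.shape)                         # torch.Size([10, 1, 28, 28])
```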
snnTorch is designed to be intuitively used with PyTorch, as though each spiking neuron were simply another activation in a sequence of layers. It is therefore agnostic to fully-connected layers, convolutional layers, residual connections, etc.
At present, the neuron models are represented by recursive functions, which removes the need to store membrane potential traces for all neurons in a system in order to calculate the gradient. The lean requirements of snnTorch enable small and large networks to be viably trained on CPU, where needed. Provided that the network models and tensors are loaded onto CUDA, snnTorch takes advantage of GPU acceleration in the same way as PyTorch.
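For instance, moving a spiking network and its data to a GPU follows the usual PyTorch idiom. This is a generic sketch with an arbitrary toy model, not snnTorch-specific machinery:

```python
import torch
import torch.nn as nn
import snntorch as snn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# a toy spiking network; hidden states are instantiated on the first forward pass
net = nn.Sequential(nn.Linear(784, 10), snn.Leaky(beta=0.9, init_hidden=True)).to(device)

data = torch.rand(1, 784, device=device)  # inputs must live on the same device as the model
spk = net(data)                            # forward pass runs on the GPU when available
```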
Citation
If you find snnTorch useful in your work, please cite the following source:
@article{eshraghian2021training,
title = {Training spiking neural networks using lessons from deep learning},
author = {Eshraghian, Jason K and Ward, Max and Neftci, Emre and Wang, Xinxin
and Lenz, Gregor and Dwivedi, Girish and Bennamoun, Mohammed and
Jeong, Doo Seok and Lu, Wei D},
journal = {arXiv preprint arXiv:2109.12894},
year = {2021}
}
Let us know if you are using snnTorch in any interesting work, research or blogs, as we would love to hear more about it! Reach out at [email protected].
Requirements
The following packages need to be installed to use snnTorch:
- torch >= 1.1.0
- numpy >= 1.17
- pandas
- matplotlib
They are automatically installed if snnTorch is installed using the pip command. Ensure the correct version of torch is installed for your system to enable CUDA compatibility.
Installation
Run the following to install:
$ pip install snntorch
To install snnTorch from source instead:
$ git clone https://github.com/jeshraghian/snnTorch
$ cd snntorch
$ python setup.py install
To install snntorch with conda:
$ conda install -c conda-forge snntorch
To install an Intelligent Processing Unit (IPU) based build that uses Graphcore's accelerators:
$ pip install snntorch-ipu
API & Examples
A complete API is available here. Examples, tutorials and Colab notebooks are provided.
Quickstart
Here are a few ways you can get started with snnTorch:
For a quick example to run snnTorch, see the following snippet, or test the quickstart notebook:
```python
import torch, torch.nn as nn
import snntorch as snn
from snntorch import surrogate
from snntorch import utils

num_steps = 25  # number of time steps
batch_size = 1
beta = 0.5  # neuron decay rate
spike_grad = surrogate.fast_sigmoid()  # surrogate gradient

net = nn.Sequential(
    nn.Conv2d(1, 8, 5),
    nn.MaxPool2d(2),
    snn.Leaky(beta=beta, init_hidden=True, spike_grad=spike_grad),
    nn.Conv2d(8, 16, 5),
    nn.MaxPool2d(2),
    snn.Leaky(beta=beta, init_hidden=True, spike_grad=spike_grad),
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 10),
    snn.Leaky(beta=beta, init_hidden=True, spike_grad=spike_grad, output=True)
)

data_in = torch.rand(num_steps, batch_size, 1, 28, 28)  # random input data
spike_recording = []  # record spikes over time
utils.reset(net)  # reset/initialize hidden states for all neurons

for step in range(num_steps):  # loop over time
    spike, state = net(data_in[step])  # one time step of forward-pass
    spike_recording.append(spike)  # record spikes in list
```
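To extend the snippet into a training step, the recorded spikes can be stacked and passed to a rate-based loss from snntorch.functional. The sketch below assumes hypothetical integer class targets:

```python
import snntorch.functional as SF

targets = torch.randint(0, 10, (batch_size,))  # hypothetical class labels
loss_fn = SF.ce_rate_loss()                    # cross-entropy applied to firing rates
loss = loss_fn(torch.stack(spike_recording), targets)
loss.backward()  # surrogate gradients flow back through the spiking nonlinearity
```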
A Deep Dive into SNNs
If you wish to learn all the fundamentals of training spiking neural networks, from neuron models, to the neural code, up to backpropagation, the snnTorch tutorial series is a great place to begin. It consists of interactive notebooks with complete explanations that can get you up to speed.
| Tutorial | Title |
|---|---|
| Tutorial 1 | Spike Encoding with snnTorch |
| Tutorial 2 | The Leaky Integrate-and-Fire Neuron |
| Tutorial 3 | A Feedforward Spiking Neural Network |
| Tutorial 4 | 2nd Order Spiking Neuron Models (Optional) |
| Tutorial 5 | Training Spiking Neural Networks with snnTorch |
| Tutorial 6 | Surrogate Gradient Descent in a Convolutional SNN |
| Tutorial 7 | Neuromorphic Datasets with Tonic + snnTorch |
Intelligent Processing Unit (IPU) Acceleration
snnTorch has been optimized for Graphcore's IPU accelerators. To install an IPU based build of snnTorch:
$ pip install snntorch-ipu
Low-level custom operations for IPU compatibility are automatically compiled the first time `import snntorch` is called. When updating the Poplar SDK, these operations may need to be recompiled; this can be done by reinstalling `snntorch-ipu`, or by deleting the files with an `.so` extension in the base directory. The `snntorch.backprop` module, and several functions from `snntorch.functional` and `snntorch.surrogate`, are incompatible with IPUs, but can be recreated using PyTorch primitives.
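As an illustration of that last point, a fast-sigmoid surrogate can be rebuilt from torch.autograd.Function alone. This is a sketch of the standard technique, not the IPU-specific implementation:

```python
import torch

class FastSigmoidSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid gradient in the backward pass."""

    @staticmethod
    def forward(ctx, mem, slope=25.0):
        ctx.save_for_backward(mem)
        ctx.slope = slope
        return (mem > 0).float()  # fire a spike when the membrane potential crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (mem,) = ctx.saved_tensors
        grad = grad_output / (ctx.slope * mem.abs() + 1.0) ** 2  # smooth proxy for dS/dU
        return grad, None  # no gradient with respect to the slope
```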
Additional requirements include:
- poptorch
- The Poplar SDK
Refer to Graphcore's documentation for instructions on installing poptorch and the Poplar SDK.
The homepage for the snnTorch IPU project can be found here. A tutorial for training SNNs is provided here.
Contributing
If you're ready to contribute to snnTorch, instructions to do so can be found here.
Acknowledgments
snnTorch is currently maintained by the UCSC Neuromorphic Computing Group. It was initially developed by Jason K. Eshraghian in the Lu Group (University of Michigan).
Additional contributions were made by Vincent Sun, Peng Zhou, Ridger Zhu, Alexander Henkes, Xinxin Wang, and Emre Neftci.
License & Copyright
snnTorch source code is published under the terms of the MIT License. snnTorch's documentation is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License (CC BY-SA 3.0).