  • Stars: 425
  • Rank: 98,261 (Top 2%)
  • Language: Jupyter Notebook
  • License: MIT License
  • Created: about 2 years ago
  • Updated: over 1 year ago

Repository Details

Reproducing Yann LeCun's 1989 paper "Backpropagation Applied to Handwritten Zip Code Recognition", to my knowledge the earliest real-world application of a neural net trained with backpropagation.

lecun1989-repro

(teaser image)

This code tries to reproduce the 1989 Yann LeCun et al. paper: Backpropagation Applied to Handwritten Zip Code Recognition. To my knowledge this is the earliest real-world application of a neural net trained with backpropagation (now 33 years ago).

run

Since we don't have the exact dataset that was used in the paper, we take MNIST and randomly pick examples from it to generate an approximation of the dataset, which contains only 7291 training and 2007 test digits, each only 16x16 pixels in size (standard MNIST is 28x28).

$ python prepro.py
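
A rough sketch of what this preprocessing amounts to is below (my own approximation; the actual prepro.py may differ in details such as the random seed, the sampling and the interpolation mode):

# Sketch: sample 7291 training / 2007 test digits from MNIST, downscale 28x28 -> 16x16,
# and rescale pixel values to [-1, 1]. The seed and interpolation choices here are assumptions.
import torch
import torch.nn.functional as F
from torchvision import datasets

def build_split(train: bool, n: int, seed: int = 1337):
    ds = datasets.MNIST('./data', train=train, download=True)
    rng = torch.Generator().manual_seed(seed)
    idx = torch.randperm(len(ds), generator=rng)[:n]
    X = ds.data[idx].float().unsqueeze(1)                  # (n, 1, 28, 28), values in [0, 255]
    X = F.interpolate(X, size=(16, 16), mode='bilinear', align_corners=False)
    X = X / 127.5 - 1.0                                    # scale to [-1, 1]
    return X, ds.targets[idx]

Xtr, Ytr = build_split(train=True, n=7291)
Xte, Yte = build_split(train=False, n=2007)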

Now we can attempt to reproduce the paper. The original network trained for 3 days, but my (Apple Silicon M1) MacBook Air, 33 years later, chunks through it in about 90 seconds (native arm64 but CPU only; I don't believe PyTorch and the Apple M1 are best friends just yet, but that is still about a 3000X speedup). So now that we've run prepro we can run repro! (haha):

$ python repro.py

Running this prints (on the 23rd and final pass):

eval: split train. loss 4.073383e-03. error 0.62%. misses: 45
eval: split test . loss 2.838382e-02. error 4.09%. misses: 82

This is close but not quite the same as what the paper reports. To match the paper exactly we'd expect the following instead:

eval: split train. loss 2.5e-3. error 0.14%. misses: 10
eval: split test . loss 1.8e-2. error 5.00%. misses: 102
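
(For reference, the error column in these printouts is just the number of misses divided by the size of the split, 7291 train and 2007 test digits, e.g. for the repro numbers above:)

print(f'train error: {100 * 45 / 7291:.2f}%')   # -> 0.62%
print(f'test  error: {100 * 82 / 2007:.2f}%')   # -> 4.09%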

I expect that the majority of this discrepancy comes from the training dataset itself. We've only simulated the original dataset using what we have today, 33 years later (MNIST). There are a number of other details that are not specified in the paper, so I also had to do some guessing (see notes below). For example, the specific sparse connectivity structure between layers H1 and H2 is not described; the paper just says that the inputs are "chosen according to a scheme that will not be discussed here". In addition, the paper uses a "special version of Newton's algorithm that uses a positive, diagonal approximation of Hessian", but I only used simple SGD in this implementation because it is significantly simpler and, according to the paper, "this algorithm is not believed to bring a tremendous increase in learning speed". Anyway, we are getting numbers on similar orders of magnitude...
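
To make the connectivity guess concrete, here is a minimal sketch of the kind of block-structured H1 -> H2 scheme I assume; the contiguous wrapping choice of 8 out of 12 planes is my own guess, not something the paper specifies:

# Hypothetical block-structured connectivity: each of the 12 H2 planes reads from
# 8 of the 12 H1 planes, chosen as a contiguous (wrapping) block. This is a guess;
# the scheme actually used in the paper is "not discussed" there.
import torch

def h1_to_h2_mask(n_h1=12, n_h2=12, k=8):
    mask = torch.zeros(n_h2, n_h1)
    for j in range(n_h2):
        for i in range(k):
            mask[j, (j + i) % n_h1] = 1.0
    return mask

print(h1_to_h2_mask())  # (12, 12) binary matrix with 8 ones per row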

notes

My notes from the paper:

  • 7291 digits are used for training
  • 2007 digits are used for testing
  • each image is 16x16 pixels grayscale (not binary)
  • images are scaled to range [-1, 1]
  • network has three hidden layers H1 H2 H3
    • H1 is 5x5 stride 2 conv with 12 planes. constant padding of -1.
    • not "standard": units do not share biases! (including in the same feature plane)
    • H1 has 768 units (8*8*12), 19,968 connections (768*26), 1,068 parameters (768 biases + 25*12 weights)
    • not "standard": H2 units all draw input from 5x5 stride 2 conv but each only connecting to different 8 out of the 12 planes
    • H2 contains 192 units (4*4*12), 38,592 connections (192 units * 201 input lines), 2,592 parameters (12 * 200 weights + 192 biases)
    • H3 has 30 units fully connected to H2. So 5790 connections (30 * 192 + 30)
    • output layer has 10 units fully connected to H3. So 310 weights (30 * 10 + 10)
    • total: 1256 units, 64,660 connections, 9760 parameters (see the arithmetic check right after this list)
  • tanh activations on all units (including output units!)
    • weights of output chosen to be in quasi-linear regime
  • cost function: mean squared error
  • weight init: random values in U[-2.4/F, 2.4/F] where F is the fan-in. "tends to keep total inputs in operating range of sigmoid"
  • training
    • patterns presented in constant order
    • SGD on single example at a time
    • use special version of Newton's algorithm that uses a positive, diagonal approximation of Hessian
    • trained for 23 passes over data, measuring train+test error after each pass. total 167,693 presentations (23 * 7291)
    • final error: 2.5e-3 train, 1.8e-2 test
    • percent misclassification: 0.14% on train (10 mistakes), 5.0% on test (102 mistakes).
  • compute:
    • run on SUN-4/260 workstation
    • digital signal co-processor:
      • 256 kbytes of local memory
      • peak performance of 12.5M MAC/s on fp32 (ie 25MFLOPS)
    • trained for 3 days
    • throughput of 10-12 digits/s, "limited mainly by the normalization step"
    • throughput of 30 digits/s on normalized digits
  • "we have successfully applied backpropagation learning to a large, real-world task"

Open questions:

  • The 12 -> 8 sparse connectivity between H1 and H2 is not described in this paper... I will assume a sensible block-structure connectivity (along the lines of the sketch above)
  • It is not clear what exactly the "MSE loss" is. Was the scaling factor of 1/2 included to simplify the gradient calculation? I will assume no.
  • What is the learning rate? I will run a sweep to determine the best one manually.
  • Was any learning rate decay used? Not mentioned, I am assuming no.
  • Was any weight decay used? Not mentioned, so I am assuming no.
  • Is there a typo in the pdf, i.e. should the fan-in in the weight init have a square root? The pdf's formatting is a bit messed up. Assuming yes.
  • The paper does not say, but what exactly are the targets? Assuming they are +1/-1 for positive/negative, as the output units have tanh too (a small sketch of this assumption follows this list)...
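
A small sketch of the assumed target encoding and loss (these are my assumptions, not something stated in the paper): targets are -1 everywhere except +1 at the correct class, and the loss is a plain mean squared error between the tanh outputs and these targets, without a 1/2 factor.

import torch

def make_targets(labels: torch.Tensor, num_classes: int = 10) -> torch.Tensor:
    # -1 for negative classes, +1 for the correct class (assumption)
    t = -torch.ones(labels.shape[0], num_classes)
    t[torch.arange(labels.shape[0]), labels] = 1.0
    return t

yhat = torch.tanh(torch.randn(4, 10))            # stand-in for the network's tanh outputs
y = make_targets(torch.tensor([3, 1, 4, 1]))
loss = ((yhat - y) ** 2).mean()                  # assumed MSE, no 1/2 factor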

One more note on the weight init conundrum. E.g. the "Kaiming init" is:

a = gain * sqrt(3 / fan_in)
~U(-a, a)

For tanh neurons the recommended gain is 5/3. Therefore we would have a = sqrt(3) * 5/3 * sqrt(1 / fan_in) = 2.89 * sqrt(1 / fan_in), which is close to what the paper does (gain 2.4). So if the original work did in fact use a sqrt and the pdf is just formatted wrong, then the (modern) Kaiming init and the originally used init are pretty close.
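
A tiny numeric comparison of the two bounds (assuming the paper's init really does divide by sqrt(fan_in), and ignoring whether the bias counts toward the fan-in):

import math

def paper_bound(fan_in):
    # assumed form of the paper's init bound, with the conjectured square root: 2.4 / sqrt(fan_in)
    return 2.4 / math.sqrt(fan_in)

def kaiming_bound(fan_in):
    # a = gain * sqrt(3 / fan_in), with gain 5/3 for tanh
    return (5.0 / 3.0) * math.sqrt(3.0 / fan_in)

for fan_in in [25, 200, 192, 30]:                # rough per-unit fan-ins of H1, H2, H3, output (weights only)
    print(fan_in, round(paper_bound(fan_in), 3), round(kaiming_bound(fan_in), 3))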

todos

  • modernize the network using knowledge from 33 years of time travel.
  • potentially include my janky hyperparameter sweeping code for tuning the learning rate

More Repositories

  1. nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs. (Python, 22,607 stars)
  2. minGPT: A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training (Python, 15,735 stars)
  3. char-rnn: Multi-layer Recurrent Neural Networks (LSTM, GRU, RNN) for character-level language models in Torch (Lua, 11,228 stars)
  4. convnetjs: Deep Learning in Javascript. Train Convolutional Neural Networks (or ordinary ones) in your browser. (JavaScript, 10,642 stars)
  5. nn-zero-to-hero: Neural Networks: Zero to Hero (Jupyter Notebook, 8,476 stars)
  6. micrograd: A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API (Jupyter Notebook, 5,613 stars)
  7. neuraltalk2: Efficient Image Captioning code in Torch, runs on GPU (Jupyter Notebook, 5,426 stars)
  8. neuraltalk: NeuralTalk is a Python+numpy project for learning Multimodal Recurrent Neural Networks that describe images with sentences. (Python, 5,352 stars)
  9. arxiv-sanity-preserver: Web interface for browsing, search and filtering recent arxiv submissions (Python, 4,943 stars)
  10. ng-video-lecture (Python, 2,074 stars)
  11. reinforcejs: Reinforcement Learning Agents in Javascript (Dynamic Programming, Temporal Difference, Deep Q-Learning, Stochastic/Deterministic Policy Gradients) (HTML, 1,273 stars)
  12. makemore: An autoregressive character-level language model for making more things (Python, 1,217 stars)
  13. cryptos: Pure Python from-scratch zero-dependency implementation of Bitcoin for educational purposes (Jupyter Notebook, 1,142 stars)
  14. randomfun: Notebooks and various random fun (Jupyter Notebook, 996 stars)
  15. ulogme: Automatically collect and visualize usage statistics in Ubuntu/OSX environments. (Python, 941 stars)
  16. recurrentjs: Deep Recurrent Neural Networks and LSTMs in Javascript. More generally also arbitrary expression graphs with automatic differentiation. (HTML, 918 stars)
  17. arxiv-sanity-lite: arxiv-sanity lite: tag arxiv papers of interest, get recommendations of similar papers in a nice UI using SVMs over tfidf feature vectors based on paper abstracts. (Python, 864 stars)
  18. tsnejs: Implementation of t-SNE visualization algorithm in Javascript. (JavaScript, 815 stars)
  19. pytorch-normalizing-flows: Normalizing flows in PyTorch. Current intended use is education not production. (Jupyter Notebook, 790 stars)
  20. paper-notes: Random notes on papers, likely a short-term repo. (660 stars)
  21. svmjs: Support Vector Machine in Javascript (SMO algorithm, supports arbitrary kernels) + GUI demo (JavaScript, 636 stars)
  22. pytorch-made: MADE (Masked Autoencoder Density Estimation) implementation in PyTorch (Python, 510 stars)
  23. karpathy.github.io: my blog (CSS, 472 stars)
  24. deep-vector-quantization: VQVAEs, GumbelSoftmaxes and friends (Jupyter Notebook, 422 stars)
  25. covid-sanity: Aspires to help the influx of bioRxiv / medRxiv papers on COVID-19 (Python, 351 stars)
  26. find-birds: Find people you should follow on Twitter based on who the people you follow follow (Python, 305 stars)
  27. forestjs: Random Forest implementation for JavaScript. Supports arbitrary weak learners. Includes interactive demo. (JavaScript, 284 stars)
  28. researchlei: An Academic Papers Management and Discovery System (Python, 194 stars)
  29. Random-Forest-Matlab: A Random Forest implementation for MATLAB. Supports arbitrary weak learners that you can define. (MATLAB, 172 stars)
  30. researchpooler: Automating research publications discovery and analysis. For example, ever wish your computer could automatically open papers that are most similar to a paper at an arbitrary url? How about finding all papers that report results on some dataset? Let's re-imagine literature review. (Python, 167 stars)
  31. nipspreview: Scripts that generate .html to more easily see NIPS papers (Python, 147 stars)
  32. ttmik: Talk to me in Korean Anki cards and related scripts (Python, 103 stars)
  33. tf-agent: tensorflow reinforcement learning agents for OpenAI gym environments (Python, 99 stars)
  34. gitstats: A lightweight/pretty visualizer for recent work on a git code base in multiple branches. Helps stay up to date with teams working on one git repo in many branches. (HTML, 85 stars)
  35. EigenLibSVM: A wrapper for LibSVM that lets you train SVM's directly on Eigen library matrices in C++ (C++, 74 stars)
  36. MatlabWrapper: C++ convenience class to communicate with a Matlab instance. Send matrices back and forth, execute arbitrary Matlab commands, or drop into interactive Matlab session right in the middle of your C++ code. (C++, 52 stars)
  37. twoolpy: useful scripts to work with Twitter + Python. Requires the tweepy library. (Python, 50 stars)
  38. notpygamejs: Game making library for using Canvas element (JavaScript, 41 stars)
  39. scholaroctopus: A set of tools/pages that help explore academic literature (33 stars)
  40. karpathy: root repo (19 stars)