• Stars: 8,476
  • Rank: 4,324 (Top 0.09%)
  • Language: Jupyter Notebook
  • License: MIT License
  • Created: about 2 years ago
  • Updated: over 1 year ago


Repository Details

Neural Networks: Zero to Hero

A course on neural networks that starts all the way at the basics. The course is a series of YouTube videos where we code and train neural networks together. The Jupyter notebooks we build in the videos are then captured here inside the lectures directory. Every lecture also has a set of exercises included in the video description. (This may grow into something more respectable).


Lecture 1: The spelled-out intro to neural networks and backpropagation: building micrograd

Backpropagation and training of neural networks. Assumes basic knowledge of Python and a vague recollection of calculus from high school.
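As a taste of what gets built in this lecture, here is a heavily condensed sketch of a scalar autograd engine in the spirit of micrograd. This is not the repository's actual code: only add, multiply, and tanh are supported, and the class is pared down to the backward-pass machinery.

```python
# Minimal scalar autograd sketch (illustrative; not the lecture's micrograd code).
import math

class Value:
    """A scalar that remembers how it was produced, so gradients can flow back."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def _backward():
            self.grad += (1 - t ** 2) * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topologically sort the graph, then apply the chain rule node by node
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# tiny neuron: y = tanh(w*x + b); backward() fills in dy/dx, dy/dw, dy/db
x, w, b = Value(0.5), Value(-3.0), Value(2.0)
y = (w * x + b).tanh()
y.backward()
print(x.grad, w.grad, b.grad)
```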


Lecture 2: The spelled-out intro to language modeling: building makemore

We implement a bigram character-level language model, which we will further complexify in follow-up videos into a modern Transformer language model, like GPT. In this video, the focus is on (1) introducing torch.Tensor and its subtleties and use in efficiently evaluating neural networks and (2) the overall framework of language modeling that includes model training, sampling, and the evaluation of a loss (e.g. the negative log likelihood for classification).
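A minimal sketch of the bigram counting idea described above. The word list here is a toy stand-in for the names dataset used in the lecture; the mechanics (count bigrams, normalize into probabilities, score with negative log likelihood) are the same.

```python
# Bigram character-level model: count, normalize, evaluate NLL (toy data).
import torch

words = ["emma", "olivia", "ava"]   # toy stand-in for the real dataset
chars = sorted(set("".join(words)))
stoi = {s: i + 1 for i, s in enumerate(chars)}
stoi['.'] = 0                        # '.' marks word start/end
N = torch.zeros((len(stoi), len(stoi)), dtype=torch.int32)

# count how often each character follows each other character
for w in words:
    cs = ['.'] + list(w) + ['.']
    for c1, c2 in zip(cs, cs[1:]):
        N[stoi[c1], stoi[c2]] += 1

# turn counts into row-normalized probabilities (with +1 smoothing)
P = (N + 1).float()
P /= P.sum(dim=1, keepdim=True)

# average negative log likelihood of the training bigrams
log_likelihood, n = 0.0, 0
for w in words:
    cs = ['.'] + list(w) + ['.']
    for c1, c2 in zip(cs, cs[1:]):
        log_likelihood += torch.log(P[stoi[c1], stoi[c2]])
        n += 1
print(f"nll = {(-log_likelihood / n).item():.4f}")
```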


Lecture 3: Building makemore Part 2: MLP

We implement a multilayer perceptron (MLP) character-level language model. In this video we also introduce many basics of machine learning (e.g. model training, learning rate tuning, hyperparameters, evaluation, train/dev/test splits, under/overfitting, etc.).
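A rough sketch of the forward pass of such an MLP character-level model: embed a window of context characters, concatenate the embeddings, pass them through a tanh hidden layer, and score the next character with cross entropy. The shapes and hyperparameters here are illustrative, not the lecture's exact values.

```python
# MLP character-level language model: forward pass only (illustrative sizes).
import torch
import torch.nn.functional as F

vocab_size, block_size = 27, 3      # 26 letters + '.', context of 3 characters
n_embd, n_hidden = 10, 200

g = torch.Generator().manual_seed(42)
C  = torch.randn((vocab_size, n_embd),            generator=g)  # embedding table
W1 = torch.randn((block_size * n_embd, n_hidden), generator=g)
b1 = torch.randn(n_hidden,                        generator=g)
W2 = torch.randn((n_hidden, vocab_size),          generator=g)
b2 = torch.randn(vocab_size,                      generator=g)

# a fake minibatch of integer contexts and targets, just to show the shapes
Xb = torch.randint(0, vocab_size, (32, block_size), generator=g)
Yb = torch.randint(0, vocab_size, (32,),            generator=g)

emb = C[Xb]                                   # (32, 3, 10): embed the characters
h = torch.tanh(emb.view(32, -1) @ W1 + b1)    # (32, 200): hidden layer
logits = h @ W2 + b2                          # (32, 27): scores for next char
loss = F.cross_entropy(logits, Yb)            # negative log likelihood
print(loss.item())
```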


Lecture 4: Building makemore Part 3: Activations & Gradients, BatchNorm

We dive into some of the internals of MLPs with multiple layers and scrutinize the statistics of the forward pass activations, backward pass gradients, and some of the pitfalls when they are improperly scaled. We also look at the typical diagnostic tools and visualizations you'd want to use to understand the health of your deep network. We learn why training deep neural nets can be fragile and introduce the first modern innovation that made doing so much easier: Batch Normalization. Residual connections and the Adam optimizer remain notable todos for a later video.
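A hedged sketch of the batch normalization step discussed here: standardize the pre-activations over the batch before the nonlinearity, then scale and shift with learnable parameters. Sizes are illustrative, and the saturation check at the end is in the spirit of the lecture's diagnostics rather than its exact code.

```python
# Batch normalization of a hidden layer's pre-activations (illustrative sizes).
import torch

torch.manual_seed(0)
x = torch.randn(32, 100)              # a batch of 32 examples, 100 features
W = torch.randn(100, 200) * 0.1       # linear layer weights
hpre = x @ W                          # pre-activations, (32, 200)

bngain = torch.ones(1, 200)           # learnable scale (gamma)
bnbias = torch.zeros(1, 200)          # learnable shift (beta)

mean = hpre.mean(0, keepdim=True)     # batch statistics
std  = hpre.std(0, keepdim=True)
hpre_bn = bngain * (hpre - mean) / (std + 1e-5) + bnbias

h = torch.tanh(hpre_bn)
# quick health check: what fraction of tanh units are saturated?
print(f"saturated tanh fraction: {(h.abs() > 0.97).float().mean().item():.3f}")
```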


Lecture 5: Building makemore Part 4: Becoming a Backprop Ninja

We take the 2-layer MLP (with BatchNorm) from the previous video and backpropagate through it manually without using PyTorch autograd's loss.backward(). That is, we backprop through the cross entropy loss, 2nd linear layer, tanh, batchnorm, 1st linear layer, and the embedding table. Along the way, we get an intuitive understanding of how gradients flow backwards through the compute graph, at the level of efficient Tensors rather than individual scalars as in micrograd. This helps build competence and intuition around how neural nets are optimized and sets you up to more confidently innovate on and debug modern neural networks.

I recommend you work through the exercise yourself, keeping the video alongside it: whenever you are stuck, unpause the video and see me give away the answer. This video is not really intended to be simply watched. The exercise is here as a Google Colab. Good luck :)
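To illustrate the format of the exercise, here is a much smaller example of backpropagating by hand and checking the result against autograd. It uses a single tanh layer rather than the lecture's full 2-layer MLP with BatchNorm, but the workflow (derive each gradient, then compare to PyTorch's) is the same.

```python
# Manual backprop through loss = mean(tanh(xW + b)^2), checked against autograd.
import torch

torch.manual_seed(0)
x = torch.randn(32, 10)
W = torch.randn(10, 5, requires_grad=True)
b = torch.randn(5, requires_grad=True)

h = torch.tanh(x @ W + b)
loss = (h ** 2).mean()
loss.backward()                              # autograd's answer

# manual backward pass through the same graph
dloss_dh = 2 * h / h.numel()                 # d/dh of mean(h^2)
dh_dpre = 1 - h ** 2                         # tanh'(pre) = 1 - tanh(pre)^2
dpre = dloss_dh * dh_dpre                    # (32, 5)
dW = x.T @ dpre                              # chain rule through x @ W
db = dpre.sum(0)                             # bias broadcasts over the batch

print(torch.allclose(dW, W.grad), torch.allclose(db, b.grad))
```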


Lecture 6: Building makemore Part 5: Building WaveNet

We take the 2-layer MLP from the previous video and make it deeper with a tree-like structure, arriving at a convolutional neural network architecture similar to the WaveNet (2016) from DeepMind. In the WaveNet paper, the same hierarchical architecture is implemented more efficiently using causal dilated convolutions (not yet covered). Along the way we get a better sense of torch.nn and what it is and how it works under the hood, and what a typical deep learning development process looks like (a lot of reading of documentation, keeping track of multidimensional tensor shapes, moving between jupyter notebooks and repository code, ...).
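A compressed sketch of the tree-like fusion this lecture builds toward: instead of crushing all context characters into one hidden layer, consecutive pairs of positions are merged at each level of depth. The module and layer sizes here are illustrative, not the notebook's exact code (which also interleaves batch normalization).

```python
# WaveNet-style hierarchical fusion of character embeddings (illustrative sizes).
import torch
import torch.nn as nn

class FlattenConsecutive(nn.Module):
    """Group n consecutive time steps together along the channel dimension."""
    def __init__(self, n):
        super().__init__()
        self.n = n
    def forward(self, x):                    # x: (B, T, C)
        B, T, C = x.shape
        x = x.view(B, T // self.n, C * self.n)
        return x.squeeze(1) if x.shape[1] == 1 else x

vocab_size, n_embd, n_hidden, block_size = 27, 24, 128, 8
model = nn.Sequential(
    nn.Embedding(vocab_size, n_embd),
    FlattenConsecutive(2), nn.Linear(n_embd * 2, n_hidden), nn.Tanh(),
    FlattenConsecutive(2), nn.Linear(n_hidden * 2, n_hidden), nn.Tanh(),
    FlattenConsecutive(2), nn.Linear(n_hidden * 2, n_hidden), nn.Tanh(),
    nn.Linear(n_hidden, vocab_size),
)
x = torch.randint(0, vocab_size, (32, block_size))   # a dummy batch of contexts
print(model(x).shape)                                # torch.Size([32, 27])
```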


Lecture 7: Let's build GPT: from scratch, in code, spelled out.

We build a Generatively Pretrained Transformer (GPT), following the paper "Attention is All You Need" and OpenAI's GPT-2 / GPT-3. We talk about connections to ChatGPT, which has taken the world by storm. We watch GitHub Copilot, itself a GPT, help us write a GPT (meta :D!). I recommend people watch the earlier makemore videos to get comfortable with the autoregressive language modeling framework and basics of tensors and PyTorch nn, which we take for granted in this video.
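A minimal sketch of the causal self-attention mechanism at the heart of this lecture's GPT: a single head, with no multi-head projection, residuals, or dropout, and with illustrative sizes.

```python
# Single-head causal self-attention (illustrative sizes, no multi-head/dropout).
import torch
import torch.nn.functional as F

torch.manual_seed(1337)
B, T, C = 4, 8, 32                   # batch, time (context length), channels
head_size = 16
x = torch.randn(B, T, C)

key   = torch.nn.Linear(C, head_size, bias=False)
query = torch.nn.Linear(C, head_size, bias=False)
value = torch.nn.Linear(C, head_size, bias=False)

k, q, v = key(x), query(x), value(x)                  # each (B, T, head_size)
wei = q @ k.transpose(-2, -1) * head_size ** -0.5     # (B, T, T) scaled affinities
tril = torch.tril(torch.ones(T, T))
wei = wei.masked_fill(tril == 0, float('-inf'))       # causal mask: no peeking ahead
wei = F.softmax(wei, dim=-1)
out = wei @ v                                          # (B, T, head_size)
print(out.shape)
```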


Ongoing...

License

MIT

More Repositories

1. nanoGPT — The simplest, fastest repository for training/finetuning medium-sized GPTs. (Python, 22,607 stars)
2. minGPT — A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training. (Python, 15,735 stars)
3. char-rnn — Multi-layer Recurrent Neural Networks (LSTM, GRU, RNN) for character-level language models in Torch. (Lua, 11,228 stars)
4. convnetjs — Deep Learning in Javascript. Train Convolutional Neural Networks (or ordinary ones) in your browser. (JavaScript, 10,642 stars)
5. micrograd — A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API. (Jupyter Notebook, 5,613 stars)
6. neuraltalk2 — Efficient Image Captioning code in Torch, runs on GPU. (Jupyter Notebook, 5,426 stars)
7. neuraltalk — NeuralTalk is a Python+numpy project for learning Multimodal Recurrent Neural Networks that describe images with sentences. (Python, 5,352 stars)
8. arxiv-sanity-preserver — Web interface for browsing, search and filtering recent arxiv submissions. (Python, 4,943 stars)
9. ng-video-lecture — (Python, 2,074 stars)
10. reinforcejs — Reinforcement Learning Agents in Javascript (Dynamic Programming, Temporal Difference, Deep Q-Learning, Stochastic/Deterministic Policy Gradients). (HTML, 1,273 stars)
11. makemore — An autoregressive character-level language model for making more things. (Python, 1,217 stars)
12. cryptos — Pure Python from-scratch zero-dependency implementation of Bitcoin for educational purposes. (Jupyter Notebook, 1,142 stars)
13. randomfun — Notebooks and various random fun. (Jupyter Notebook, 996 stars)
14. ulogme — Automatically collect and visualize usage statistics in Ubuntu/OSX environments. (Python, 941 stars)
15. recurrentjs — Deep Recurrent Neural Networks and LSTMs in Javascript. More generally also arbitrary expression graphs with automatic differentiation. (HTML, 918 stars)
16. arxiv-sanity-lite — arxiv-sanity lite: tag arxiv papers of interest, get recommendations of similar papers in a nice UI using SVMs over tfidf feature vectors based on paper abstracts. (Python, 864 stars)
17. tsnejs — Implementation of t-SNE visualization algorithm in Javascript. (JavaScript, 815 stars)
18. pytorch-normalizing-flows — Normalizing flows in PyTorch. Current intended use is education not production. (Jupyter Notebook, 790 stars)
19. paper-notes — Random notes on papers, likely a short-term repo. (660 stars)
20. svmjs — Support Vector Machine in Javascript (SMO algorithm, supports arbitrary kernels) + GUI demo. (JavaScript, 636 stars)
21. pytorch-made — MADE (Masked Autoencoder Density Estimation) implementation in PyTorch. (Python, 510 stars)
22. karpathy.github.io — my blog. (CSS, 472 stars)
23. lecun1989-repro — Reproducing Yann LeCun 1989 paper "Backpropagation Applied to Handwritten Zip Code Recognition", to my knowledge the earliest real-world application of a neural net trained with backpropagation. (Jupyter Notebook, 425 stars)
24. deep-vector-quantization — VQVAEs, GumbelSoftmaxes and friends. (Jupyter Notebook, 422 stars)
25. covid-sanity — Aspires to help the influx of bioRxiv / medRxiv papers on COVID-19. (Python, 351 stars)
26. find-birds — Find people you should follow on Twitter based on who the people you follow follow. (Python, 305 stars)
27. forestjs — Random Forest implementation for JavaScript. Supports arbitrary weak learners. Includes interactive demo. (JavaScript, 284 stars)
28. researchlei — An Academic Papers Management and Discovery System. (Python, 194 stars)
29. Random-Forest-Matlab — A Random Forest implementation for MATLAB. Supports arbitrary weak learners that you can define. (MATLAB, 172 stars)
30. researchpooler — Automating research publications discovery and analysis. For example, ever wish your computer could automatically open papers that are most similar to a paper at an arbitrary url? How about finding all papers that report results on some dataset? Let's re-imagine literature review. (Python, 167 stars)
31. nipspreview — Scripts that generate .html to more easily see NIPS papers. (Python, 147 stars)
32. ttmik — Talk to me in Korean Anki cards and related scripts. (Python, 103 stars)
33. tf-agent — tensorflow reinforcement learning agents for OpenAI gym environments. (Python, 99 stars)
34. gitstats — A lightweight/pretty visualizer for recent work on a git code base in multiple branches. Helps stay up to date with teams working on one git repo in many branches. (HTML, 85 stars)
35. EigenLibSVM — A wrapper for LibSVM that lets you train SVM's directly on Eigen library matrices in C++. (C++, 74 stars)
36. MatlabWrapper — C++ convenience class to communicate with a Matlab instance. Send matrices back and forth, execute arbitrary Matlab commands, or drop into interactive Matlab session right in the middle of your C++ code. (C++, 52 stars)
37. twoolpy — Useful scripts to work with Twitter + Python. Requires the tweepy library. (Python, 50 stars)
38. notpygamejs — Game making library for using Canvas element. (JavaScript, 41 stars)
39. scholaroctopus — A set of tools/pages that help explore academic literature. (33 stars)
40. karpathy — root repo. (19 stars)