Deep Learning Illustrated (2020)

This repository is home to the code that accompanies Jon Krohn, Grant Beyleveld, and Aglaé Bassens' book Deep Learning Illustrated, a visual, interactive guide to artificial neural networks published under Pearson's Addison-Wesley imprint.

Installation

Step-by-step guides for running the code in this repository can be found in the installation directory. If you run into installation difficulties, please consider visiting our book's Q&A forum instead of creating an Issue.

Notebooks

All of the code covered in the book can be found in the notebooks directory as Jupyter notebooks.

Below is the book's table of contents with links to all of the individual notebooks.

Note that although TensorFlow 2.0 was released after the book went to press, all of our notebooks can be trivially converted into TensorFlow 2.x code if desired, as detailed in Chapter 14 (specifically, Example 14.1). Alternatively, TensorFlow 2.x analogs of the notebooks in the current repo are available here.
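
In practice, this conversion typically amounts to swapping standalone keras imports for their tensorflow.keras equivalents while leaving the model-building code itself unchanged. A minimal sketch (the layers shown are illustrative, not the book's exact code):

```python
# Keras-style imports used throughout the book's notebooks:
# from keras.models import Sequential
# from keras.layers import Dense

# TensorFlow 2.x equivalents -- the model definition below is unchanged:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='sigmoid', input_shape=(784,)))  # illustrative layer
model.add(Dense(10, activation='softmax'))
```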

Part I: Introducing Deep Learning

Chapter 1: Biological and Machine Vision

  • Biological Vision
  • Machine Vision
    • The Neocognitron
    • LeNet-5
    • The Traditional Machine Learning Approach
    • ImageNet and the ILSVRC
    • AlexNet
  • TensorFlow Playground
  • The Quick, Draw! Game

Chapter 2: Human and Machine Language

  • Deep Learning for Natural Language Processing
    • Deep Learning Networks Learn Representations Automatically
    • A Brief History of Deep Learning for NLP
  • Computational Representations of Language
    • One-Hot Representations of Words
    • Word Vectors
    • Word Vector Arithmetic
    • word2viz
    • Localist Versus Distributed Representations
  • Elements of Natural Human Language
  • Google Duplex

Chapter 3: Machine Art

  • A Boozy All-Nighter
  • Arithmetic on Fake Human Faces
  • Style Transfer: Converting Photos into Monet (and Vice Versa)
  • Make Your Own Sketches Photorealistic
  • Creating Photorealistic Images from Text
  • Image Processing Using Deep Learning

Chapter 4: Game-Playing Machines

  • Deep Learning, AI, and Other Beasts
    • Artificial Intelligence
    • Machine Learning
    • Representation Learning
    • Artificial Neural Networks
  • Three Categories of Machine Learning Problems
    • Supervised Learning
    • Unsupervised Learning
    • Reinforcement Learning
  • Deep Reinforcement Learning
  • Video Games
  • Board Games
    • AlphaGo
    • AlphaGo Zero
    • AlphaZero
  • Manipulation of Objects
  • Popular Reinforcement Learning Environments
    • OpenAI Gym
    • DeepMind Lab
    • Unity ML-Agents
  • Three Categories of AI
    • Artificial Narrow Intelligence
    • Artificial General Intelligence
    • Artificial Super Intelligence

Part II: Essential Theory Illustrated

Chapter 5: The (Code) Cart Ahead of the (Theory) Horse

  • Prerequisites
  • Installation
  • A Shallow Neural Network in Keras (shallow_net_in_keras.ipynb); see the sketch after this list
    • The MNIST Handwritten Digits (mnist_digit_pixel_by_pixel.ipynb)
    • A Schematic Diagram of the Network
    • Loading the Data
    • Reformatting the Data
    • Designing a Neural Network Architecture
    • Training a Deep Learning Model
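
For orientation, here is a minimal sketch in the spirit of shallow_net_in_keras.ipynb, covering the loading, reformatting, designing, and training steps listed above; the loss function and hyperparameters are illustrative rather than the notebook's exact values:

```python
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.utils import to_categorical

# Load the MNIST digits and reformat them: flatten each 28x28 image to a
# 784-dimensional vector, scale pixels to [0, 1], one-hot encode the labels.
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(60000, 784).astype('float32') / 255
X_test = X_test.reshape(10000, 784).astype('float32') / 255
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

# A shallow architecture: one hidden layer of sigmoid neurons, softmax output.
model = Sequential()
model.add(Dense(64, activation='sigmoid', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=SGD(learning_rate=0.1),  # illustrative learning rate
              metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=128, epochs=20,
          validation_data=(X_test, y_test))
```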

Chapter 6: Artificial Neurons Detecting Hot Dogs

  • Biological Neuroanatomy 101
  • The Perceptron
    • The Hot Dog / Not Hot Dog Detector
    • The Most Important Equation in the Book (see the sketch after this list)
  • Modern Neurons and Activation Functions
  • Choosing a Neuron
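
The equation the chapter builds toward is the neuron's weighted sum of inputs plus a bias, z = w·x + b. Here is a hedged NumPy sketch of a perceptron-style hot dog detector built on it; the inputs, weights, and bias are made up for illustration:

```python
import numpy as np

def perceptron_output(x, w, b):
    """Classic perceptron: fire (1) if the weighted sum of inputs
    plus the bias exceeds zero, otherwise stay silent (0)."""
    z = np.dot(w, x) + b   # the weighted sum z = w . x + b
    return 1 if z > 0 else 0

# Illustrative inputs for a hot dog detector: [has_bun, has_ketchup, has_mustard]
x = np.array([1, 0, 1])
w = np.array([3.0, 2.0, 6.0])   # made-up weights
b = -4.0                        # made-up bias (a negative threshold)

print(perceptron_output(x, w, b))  # -> 1, i.e. classified as "hot dog"
```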

Chapter 7: Artificial Neural Networks

  • The Input Layer
  • Dense Layers
  • A Hot Dog-Detecting Dense Network
    • Forward Propagation through the First Hidden Layer
    • Forward Propagation through Subsequent Layers
  • The Softmax Layer of a Fast Food-Classifying Network (softmax_demo.ipynb); see the sketch after this list
  • Revisiting our Shallow Neural Network
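
As a quick companion to the softmax_demo.ipynb material, here is a minimal NumPy softmax; the raw scores below are illustrative:

```python
import numpy as np

def softmax(z):
    """Convert a vector of raw scores into probabilities that sum to 1.
    Subtracting max(z) first keeps the exponentials numerically stable."""
    exp_z = np.exp(z - np.max(z))
    return exp_z / exp_z.sum()

# Illustrative raw outputs of a three-class fast-food classifier:
# [hot dog, burger, pizza]
z = np.array([-1.0, 1.0, 5.0])
print(softmax(z))  # ~[0.002, 0.018, 0.980] -- "pizza" gets ~98% of the probability
```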

Chapter 8: Training Deep Networks

Chapter 9: Improving Deep Networks

Part III: Interactive Applications of Deep Learning

Chapter 10: Machine Vision

  • Convolutional Neural Networks
    • The Two-Dimensional Structure of Visual Imagery
    • Computational Complexity
    • Convolutional Layers
    • Multiple Filters
    • A Convolutional Example
    • Convolutional Filter Hyperparameters
    • Stride Length
    • Padding
  • Pooling Layers
  • LeNet-5 in Keras (lenet_in_keras.ipynb); see the sketch after this list
  • AlexNet (alexnet_in_keras.ipynb) and VGGNet (vggnet_in_keras.ipynb)
  • Residual Networks
    • Vanishing Gradients: The Bête Noire of Deep CNNs
    • Residual Connection
  • Applications of Machine Vision
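
For orientation, here is a minimal LeNet-style convolutional network in Keras in the spirit of lenet_in_keras.ipynb; the layer sizes, dropout rates, and optimizer are illustrative rather than the notebook's exact choices:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten,
                                     Dense, Dropout)

# Convolutional and pooling layers extract spatial features from 28x28
# grayscale digits; dense layers at the top perform the classification.
model = Sequential([
    Conv2D(32, kernel_size=(3, 3), activation='relu',
           input_shape=(28, 28, 1)),          # first convolutional layer
    Conv2D(64, kernel_size=(3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),           # downsample the feature maps
    Dropout(0.25),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(10, activation='softmax'),          # ten digit classes
])
model.compile(loss='categorical_crossentropy', optimizer='nadam',
              metrics=['accuracy'])
model.summary()
```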

Chapter 11: Natural Language Processing

Chapter 12: Generative Adversarial Networks

Chapter 13: Deep Reinforcement Learning

  • Essential Theory of Reinforcement Learning
    • The Cart-Pole Game
    • Markov Decision Processes
    • The Optimal Policy
  • Essential Theory of Deep Q-Learning Networks
    • Value Functions
    • Q-Value Functions
    • Estimating an Optimal Q-Value
  • Defining a DQN Agent (cartpole_dqn.ipynb); see the sketch after this list
    • Initialization Parameters
    • Building the Agent’s Neural Network Model
    • Remembering Gameplay
    • Training via Memory Replay
    • Selecting an Action to Take
    • Saving and Loading Model Parameters
  • Interacting with an OpenAI Gym Environment
  • Hyperparameter Optimization with SLM Lab
  • Agents Beyond DQN
    • Policy Gradients and the REINFORCE Algorithm
    • The Actor-Critic Algorithm
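
As a rough sketch of two of the DQN pieces listed above (building the agent's Q-value network and selecting an action epsilon-greedily), with illustrative hyperparameters rather than the notebook's exact ones:

```python
import random
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

STATE_SIZE, ACTION_SIZE = 4, 2   # CartPole: 4 observations, 2 actions
EPSILON = 0.1                    # illustrative exploration rate

def build_q_network():
    """A small dense network mapping a state to one Q-value per action."""
    model = Sequential([
        Dense(32, activation='relu', input_shape=(STATE_SIZE,)),
        Dense(32, activation='relu'),
        Dense(ACTION_SIZE, activation='linear'),  # raw Q-value estimates
    ])
    model.compile(loss='mse', optimizer=Adam(learning_rate=0.001))
    return model

def select_action(model, state, epsilon=EPSILON):
    """Epsilon-greedy: explore with probability epsilon, otherwise exploit
    the action with the highest estimated Q-value."""
    if random.random() < epsilon:
        return random.randrange(ACTION_SIZE)
    q_values = model.predict(state.reshape(1, STATE_SIZE), verbose=0)
    return int(np.argmax(q_values[0]))

q_network = build_q_network()
print(select_action(q_network, np.zeros(STATE_SIZE)))
```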

Part IV: You and AI

Chapter 14: Moving Forward with Your Own Deep Learning Projects

  • Ideas for Deep Learning Projects
  • Resources for Further Projects
    • Socially Beneficial Projects
  • The Modeling Process, including Hyperparameter Tuning
    • Automation of Hyperparameter Search
  • Deep Learning Libraries
  • Software 2.0
  • Approaching Artificial General Intelligence

Book Cover