
Machine Learning with TensorFlow, 2nd Edition

This is the code repository for the 2nd edition of Manning Publications' Machine Learning with TensorFlow written by Chris Mattmann.

The code in this repository consists mostly of Jupyter Notebooks that correspond to the numbered listings in each chapter of the book. The code has been tested with TensorFlow 1.15.2, and there is also a complete port of the book's code to TensorFlow 2.x.

We welcome contributions to the TF2 port and to all of the notebooks in TF1.15.x too!

Quick Start

The repository contains two fully functional Docker images. The first, tagged latest, runs TF1.15.x and tracks the book's examples. You can get going simply by running the following from a command prompt:

$ docker pull chrismattmann/mltf2:latest
$ ./run_environment.sh

This will pull the TF1.15.x image and start Jupyter running on localhost. Watch for the startup message, then click through (including the token after the ? in the URL) to start your Jupyter session.
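
As a rough illustration (the exact port and token will differ), the URL in the startup message looks like the following and can be opened directly from the command line:

$ open "http://localhost:8888/?token=<your-token>"  # macOS; use xdg-open on Linux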

To run the TF2.x version of the code and notebooks, you can similarly run the tf2 tag:

$ docker pull chrismattmann/mltf2:tf2
$ ./run_TFv2_environment.sh

Follow the URL from the startup message.

Enjoy!

Prerequisites

Though the book has TensorFlow in the name, it is just as much about generalized machine learning, its theory, and the suite of frameworks that come in handy when doing machine learning. The requirements for running the notebooks are below; pip install them using your favorite Python (see the sketch after this list). The examples from the book have been shown to work in Python 2.7 and Python 3.7. I didn't have time to test all of them, but we are happy to receive PRs for anything we've missed.

Additionally, the Docker setup has been tested: on the latest Docker for Mac it adds only about 1.5% overhead in CPU mode, is entirely usable, and is a one-shot, easy installer for all of the dependencies. Browse the Dockerfile to see what you'll need to install and how to run the code locally if desired.

  • TensorFlow
  • Jupyter
  • Pandas - for data frames and easy tabular data manipulation
  • NumPy, SciPy
  • Matplotlib
  • NLTK - for anything text or NLP (such as Sentiment Analysis from Chapter 6)
  • TQDM - for progress bars
  • SKLearn - for various helper functions
  • Bregman Toolkit (for audio examples in Chapter 7)
  • Tika
  • Ystockquote
  • Requests
  • OpenCV
  • Horovod - use 0.18.2 (or 0.18.1) for use with the Maverick2 VGG Face model.
  • VGG16 - grab vgg16.py, vgg16_weights.npz, imagenet_classes.py, and laska.png - only works with Python 2.7; place them in the lib directory.
  • PyDub - for the LSTM examples in Chapter 17.
  • Basic Units - for use in Chapter 17. Place in libs/basic_units/ folder.
  • RNN-Tutorial - used in Chapter 17 to help implement the deep speech model and train it.
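
As promised above, here is a minimal install sketch. It covers only the core, PyPI-installable packages from this list (the repository's requirements files below are the authoritative, pinned versions); chapter-specific extras such as the Bregman Toolkit, Horovod 0.18.2, the VGG16 files, Basic Units, and RNN-Tutorial are set up separately per the notes above:

$ pip3.7 install tensorflow==1.15.2 jupyter pandas numpy scipy matplotlib nltk tqdm scikit-learn tika ystockquote requests opencv-python pydub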

Data Requirements

You will generate lots of data when running the notebooks, in particular when building models. But to train and build those models you will also need input data. I have created a Dropbox folder from which you can pull input data for training the models from the book. Access the Dropbox folder here.

Note that the Docker build described below automatically pulls down all the data for you and incorporates it into the Docker environment so that you don't have to download a thing.

The pointers below tell you what data you need for which chapters, and where to put it. Unless otherwise specified, the data should be placed in the data folder. Note that as you run the notebooks, they will generate TF models and write them, along with checkpoint files, to the models/ folder. A sketch for sanity-checking your layout appears after the chapter listings below.

Data Input Requirements

Chapter 4

  • data/311.csv

Chapter 6

  • data/word2vec-nlp-tutorial/labeledTrainData.tsv
  • data/word2vec-nlp-tutorial/testData.tsv
  • data/aclImdb/test/neg/
  • data/aclImdb/test/pos/

Chapter 7

  • data/audio_dataset/
  • data/TalkingMachinesPodcast.wav

Chapter 8

  • data/User Identification From Walking Activity/

Chapter 10

  • data/mobypos.txt

Chapter 12

  • data/cifar-10-batches-py
  • data/MNIST_data/ (if you try the MNIST extra example)

Chapter 14

  • data/cifar-10-batches-py

Chapter 15

  • data/cifar-10-batches-py
  • data/vgg_face_dataset - The VGG face metadata including Celeb Names
  • data/vgg-face - The actual VGG face data
  • data/vgg_face_full_urls.csv - Metadata information about VGG Face URLs
  • data/vgg_face_full.csv - Metadata information about all VGG Face data
  • data/vgg-models/checkpoints-1e3x4-2e4-09202019 - To run the VGG Face Estimator additional example
  • models/vgg_face_weights.h5 - To run the VGG Face verification additional example

Chapter 16

  • data/international-airline-passengers.csv

Chapter 17

  • data/LibriSpeech
  • libs/basic_units/
  • libs/RNN-Tutorial/

Chapter 18

  • data/seq2seq

Chapter 19

  • libs/vgg16/laska.png
  • data/cloth_folding_rgb_vids
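
As a minimal sanity check of your layout (a sketch only; it probes a handful of the paths listed above and assumes you run it from the repository root):

$ for p in data/311.csv data/audio_dataset data/cifar-10-batches-py models; do [ -e "$p" ] && echo "OK $p" || echo "MISSING $p"; done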

Setting up the environment (Tested on Mac and Linux)

Using Docker

Building the image

# Builds a single Docker image compatible with both GPU and CPU.
./build_environment.sh #TensorFlow1
./build_TFv2_environment.sh #TensorFlow2

Running the notebook from docker

# Runs in GPU or CPU mode: looks for NVIDIA drivers first and falls back to regular CPU mode.
./run_environment.sh #TensorFlow1
./run_TFv2_environment.sh # TensorFlow2

Using a GPU

You need to install nvidia-docker to use your GPU in Docker. Follow these instructions (also found on the linked page):

# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
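
To verify the toolkit works, a quick check (the CUDA base image tag here is an assumption; use whichever tag matches your setup):

$ docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi  # should print your GPU table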

Using your local python

Building the environment

If you want to build with your existing Python, that's fine: you will need Python 2.7 for some of the chapters noted above (like Chapter 7, which uses the Bregman Toolkit), and Python 3.7 for everything else. The requirements.txt file is different for each, so watch which one you pip install below.

# Python 3.7 - GPU and CPU
$ pip3.7 install -r requirements.txt

# Python 3.7 - TensorFlow 2, GPU and CPU
$ pip3.7 install -r requirements-tf2.txt

# Python 2.7 - CPU
$ pip2.7 install -r requirements-py2.txt

# Python 2.7 - GPU
$ pip2.7 install -r requirements-gpu-py2.txt
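
After installing, a quick sanity check (a minimal sketch; the expected version follows the TensorFlow 1.15.2 testing note above):

$ python3.7 -c "import tensorflow as tf; print(tf.__version__)"  # expect 1.15.2 for the book examples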

Running the notebook from your local environment

$ jupyter notebook
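
The notebooks reference data/ and models/ by relative path, so starting Jupyter from the repository root is assumed here (adjust the placeholder path to your checkout):

$ cd /path/to/this/repository && jupyter notebook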

Questions, comments?

Send them to Chris A. Mattmann. Also, please consider heading over to the livebook forum, where you can discuss the book with other readers and with the author.

Contributors

  • Chris A. Mattmann
  • Rob Royce (tensorflow2 branch)
  • Philip Southam (Dockerfile build in docker branch)

License

Apache License, version 2
