• Stars: 5,139
• Rank: 7,722 (top 0.2%)
• Language: Jupyter Notebook
• License: MIT License
• Created: almost 6 years ago
• Updated: 4 months ago

Repository Details

Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.

PyTorch Seq2Seq

Note: This repo only works with torchtext 0.9 or above, which requires PyTorch 1.8 or above. If you are using torchtext 0.8, please use this branch.

This repo contains tutorials covering understanding and implementing sequence-to-sequence (seq2seq) models using PyTorch 1.8, torchtext 0.9 and spaCy 3.0, on Python 3.8.

If you find any mistakes or disagree with any of the explanations, please do not hesitate to submit an issue. I welcome any feedback, positive or negative!

Getting Started

To install PyTorch, see installation instructions on the PyTorch website.

To install torchtext:

pip install torchtext

We'll also make use of spaCy to tokenize our data. To install spaCy, follow the instructions here, making sure to install both the English and German models with:

python -m spacy download en_core_web_sm
python -m spacy download de_core_news_sm
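
As a quick sanity check that both models installed correctly, you can load them and tokenize a sentence. This is a minimal sketch; the tokenize_de/tokenize_en helper names are just illustrative:

import spacy

# load the German and English models downloaded above
spacy_de = spacy.load("de_core_news_sm")
spacy_en = spacy.load("en_core_web_sm")

def tokenize_de(text):
    # tokenize German text into a list of token strings
    return [tok.text for tok in spacy_de.tokenizer(text)]

def tokenize_en(text):
    # tokenize English text into a list of token strings
    return [tok.text for tok in spacy_en.tokenizer(text)]

print(tokenize_en("Two young people are hiking through a forest."))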

Tutorials

  • 1 - Sequence to Sequence Learning with Neural Networks

    This first tutorial covers the workflow of a PyTorch seq2seq project using torchtext. We'll cover the basics of seq2seq networks using encoder-decoder models, how to implement these models in PyTorch, and how to use torchtext to do all of the heavy lifting for text processing. The model itself will be based on an implementation of Sequence to Sequence Learning with Neural Networks, which uses multi-layer LSTMs; a condensed sketch of the encoder-decoder pair appears after this list.

  • 2 - Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

    Now that we have the basic workflow covered, this tutorial will focus on improving our results. Building on our knowledge of PyTorch and torchtext gained from the previous tutorial, we'll cover a second model, which helps with the information compression problem faced by encoder-decoder models. This model will be based on an implementation of Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, which uses GRUs; see the decoder sketch after this list.

  • 3 - Neural Machine Translation by Jointly Learning to Align and Translate

    Next, we learn about attention by implementing Neural Machine Translation by Jointly Learning to Align and Translate. This further alleviates the information compression problem by allowing the decoder to "look back" at the input sentence via context vectors that are weighted sums of the encoder hidden states. The weights for this weighted sum are calculated by an attention mechanism, where the decoder learns to pay attention to the most relevant words in the input sentence; a minimal attention module is sketched after this list.

  • 4 - Packed Padded Sequences, Masking, Inference and BLEU

    In this notebook, we will improve the previous model architecture by adding packed padded sequences and masking, two methods commonly used in NLP. Packed padded sequences allow us to process only the non-padded elements of our input sentence with our RNN. Masking forces the model to ignore certain elements we do not want it to look at, such as attention over padded elements. Together, these give us a small performance boost. We also cover a very basic way of using the model for inference, allowing us to get translations for any sentence we give to the model, and show how to view the attention values over the source sequence for those translations. Finally, we show how to calculate the BLEU metric for our translations. A toy example of packing and masking appears after this list.

  • 5 - Convolutional Sequence to Sequence Learning

    We finally move away from RNN-based models and implement a fully convolutional model. One of the downsides of RNNs is that they are sequential: before a word is processed by the RNN, all previous words must also be processed. Convolutional models can be fully parallelized, which allows them to be trained much more quickly. We will be implementing the Convolutional Sequence to Sequence model, which uses multiple convolutional layers in both the encoder and decoder, with an attention mechanism between them; a single convolutional block is sketched after this list.

  • 6 - Attention Is All You Need

    Continuing with the non-RNN based models, we implement the Transformer model from Attention Is All You Need. This model is based solely on attention mechanisms and introduces Multi-Head Attention. The encoder and decoder are made of multiple layers, each consisting of Multi-Head Attention and Positionwise Feedforward sublayers. This architecture is currently used in many state-of-the-art sequence-to-sequence and transfer learning models; a minimal multi-head attention example appears after this list.
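
For a feel of what tutorial 1 builds, here is a condensed sketch of the encoder-decoder pair, assuming illustrative names and hyperparameters (the notebook wraps modules like these in a Seq2Seq class with teacher forcing):

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
        super().__init__()
        self.embedding = nn.Embedding(input_dim, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout)
        self.dropout = nn.Dropout(dropout)

    def forward(self, src):
        # src: [src_len, batch]
        embedded = self.dropout(self.embedding(src))
        _, (hidden, cell) = self.rnn(embedded)
        # the final hidden and cell states summarize the whole source sentence
        return hidden, cell

class Decoder(nn.Module):
    def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
        super().__init__()
        self.embedding = nn.Embedding(output_dim, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout)
        self.fc_out = nn.Linear(hid_dim, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, token, hidden, cell):
        # token: [batch] -- one target token per decoding step
        embedded = self.dropout(self.embedding(token.unsqueeze(0)))
        output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
        return self.fc_out(output.squeeze(0)), hidden, cell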
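
Tutorial 2's key change is that the decoder re-uses the encoder's final hidden state (the "context") at every decoding step instead of only the first. A minimal sketch of such a decoder, with illustrative names and dimensions:

import torch
import torch.nn as nn

class ContextDecoder(nn.Module):
    def __init__(self, output_dim, emb_dim, hid_dim, dropout):
        super().__init__()
        self.embedding = nn.Embedding(output_dim, emb_dim)
        # the context vector is concatenated onto the input at every step
        self.rnn = nn.GRU(emb_dim + hid_dim, hid_dim)
        self.fc_out = nn.Linear(emb_dim + hid_dim * 2, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, token, hidden, context):
        # token: [batch], hidden and context: [1, batch, hid_dim]
        embedded = self.dropout(self.embedding(token.unsqueeze(0)))
        output, hidden = self.rnn(torch.cat((embedded, context), dim=2), hidden)
        # the prediction sees the embedding, the new hidden state and the context
        combined = torch.cat((embedded.squeeze(0), hidden.squeeze(0), context.squeeze(0)), dim=1)
        return self.fc_out(combined), hidden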
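
The attention mechanism from tutorial 3 reduces to a small module: score each encoder hidden state against the current decoder state, then softmax the scores into weights. A sketch (the notebook's version additionally handles a bidirectional encoder):

import torch
import torch.nn as nn

class Attention(nn.Module):
    def __init__(self, enc_hid_dim, dec_hid_dim):
        super().__init__()
        self.attn = nn.Linear(enc_hid_dim + dec_hid_dim, dec_hid_dim)
        self.v = nn.Linear(dec_hid_dim, 1, bias=False)

    def forward(self, hidden, encoder_outputs):
        # hidden: [batch, dec_hid_dim], encoder_outputs: [src_len, batch, enc_hid_dim]
        src_len = encoder_outputs.shape[0]
        hidden = hidden.unsqueeze(1).repeat(1, src_len, 1)
        encoder_outputs = encoder_outputs.permute(1, 0, 2)
        # score every source position against the current decoder state
        energy = torch.tanh(self.attn(torch.cat((hidden, encoder_outputs), dim=2)))
        scores = self.v(energy).squeeze(2)    # [batch, src_len]
        return torch.softmax(scores, dim=1)   # weights sum to 1 over the source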
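
Packing and masking from tutorial 4 can both be shown on a toy batch. The pad index and dimensions below are made up for illustration:

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

pad_idx = 0
src = torch.tensor([[4, 7], [2, 3], [9, 0]])   # [src_len, batch]; second sentence is padded
src_lengths = torch.tensor([3, 2])

embedding = nn.Embedding(10, 8, padding_idx=pad_idx)
rnn = nn.GRU(8, 16)

# packing lets the RNN skip the padded positions entirely
packed = pack_padded_sequence(embedding(src), src_lengths)
packed_outputs, hidden = rnn(packed)
outputs, _ = pad_packed_sequence(packed_outputs)   # back to [src_len, batch, hid_dim]

# masking sets attention energies at <pad> positions to -inf before the
# softmax, so the decoder can never attend to padding
energies = torch.randn(2, 3)                  # [batch, src_len], stand-in values
mask = (src != pad_idx).permute(1, 0)         # [batch, src_len]
attention = torch.softmax(energies.masked_fill(~mask, float("-inf")), dim=1)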
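
The building block of tutorial 5's convolutional model is a convolution followed by a GLU activation and a scaled residual connection; because it is a convolution, every source position is processed in parallel. A single-block sketch with illustrative sizes:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    def __init__(self, hid_dim, kernel_size):
        super().__init__()
        # an odd kernel with symmetric padding keeps the sequence length fixed;
        # the convolution doubles the channels so GLU can halve them again
        self.conv = nn.Conv1d(hid_dim, 2 * hid_dim, kernel_size,
                              padding=(kernel_size - 1) // 2)

    def forward(self, x):
        # x: [batch, hid_dim, src_len] -- all positions processed at once
        conved = F.glu(self.conv(x), dim=1)
        return (conved + x) * (0.5 ** 0.5)    # scaled residual connection

block = ConvBlock(hid_dim=64, kernel_size=3)
out = block(torch.randn(2, 64, 10))           # whole sentence in one shot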
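
PyTorch also ships a ready-made multi-head attention module, which is the core sublayer of tutorial 6's Transformer (the notebook implements it from scratch to show the internals):

import torch
import torch.nn as nn

# eight heads attending in parallel over 64-dimensional embeddings
mha = nn.MultiheadAttention(embed_dim=64, num_heads=8)

src = torch.randn(10, 2, 64)          # [src_len, batch, embed_dim]
out, weights = mha(src, src, src)     # self-attention: query = key = value
print(out.shape)                      # torch.Size([10, 2, 64])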

More Repositories

1. pytorch-sentiment-analysis: Tutorials on getting started with PyTorch and TorchText for sentiment analysis. (Jupyter Notebook, 4,213 stars)
2. pytorch-image-classification: Tutorials on how to implement a few key architectures for image classification using PyTorch and TorchVision. (Jupyter Notebook, 909 stars)
3. pytorch-rl: Tutorials for reinforcement learning in PyTorch and Gym by implementing a few of the popular algorithms. [IN PROGRESS] (Jupyter Notebook, 251 stars)
4. pytorch-pos-tagging: A tutorial on how to implement models for part-of-speech tagging using PyTorch and TorchText. (Jupyter Notebook, 176 stars)
5. a-tour-of-pytorch-optimizers: A tour of different optimization algorithms in PyTorch. (Jupyter Notebook, 77 stars)
6. machine-learning-courses: A collection of machine learning courses. (36 stars)
7. code2vec: A PyTorch implementation of `code2vec: Learning Distributed Representations of Code` (Alon et al., 2018). (Python, 33 stars)
8. pytorch-generative-models: [IN PROGRESS] An introduction to generative adversarial networks (GANs) and variational autoencoders (VAEs) in PyTorch, by implementing a few key architectures. (Jupyter Notebook, 29 stars)
9. pytorch-nli: A tutorial on how to implement models for natural language inference using PyTorch and TorchText. [IN PROGRESS] (Jupyter Notebook, 25 stars)
10. pytorch-language-modeling: (Jupyter Notebook, 13 stars)
11. extreme-summarization-of-source-code: Implementation of 'A Convolutional Attention Network for Extreme Summarization of Source Code' in PyTorch using TorchText. (Python, 13 stars)
12. pytorch-text-classification: (Jupyter Notebook, 13 stars)
13. notes: (Python, 12 stars)
14. gradient-descent: Let's learn gradient descent by using linear regression, logistic regression and neural networks! (Jupyter Notebook, 11 stars)
15. pytorch-neural-style-transfer: (Python, 11 stars)
16. pytorch-for-code: Using PyTorch to apply machine learning techniques to source code. (Python, 10 stars)
17. pytorch-transfer-learning: (Python, 9 stars)
18. pytorch-practice: (Jupyter Notebook, 8 stars)
19. bag-of-tricks-for-efficient-text-classification: Implementation of 'Bag of Tricks for Efficient Text Classification' in PyTorch using TorchText. (Python, 8 stars)
20. pytorch-dqn: An implementation of various flavours of deep Q-learning (DQN) in PyTorch. (Jupyter Notebook, 7 stars)
21. recurrent-attention-model: (Python, 7 stars)
22. paper-notes: n'th attempt at keeping note of papers I have read. (6 stars)
23. lexisearch: Use semantic similarity models to query transcriptions from the Lex Fridman Podcast. (Python, 6 stars)
24. CodeSearchNet: (Python, 4 stars)
25. relation-networks: Implementation of the bAbI task from 'A simple neural network module for relational reasoning' in PyTorch using TorchText. (Python, 3 stars)
26. variational-autoencoders: (Jupyter Notebook, 3 stars)
27. snli: https://nlp.stanford.edu/projects/snli/ (Python, 3 stars)
28. go-practice: (Go, 2 stars)
29. Glucoduino: Project to read data from glucometers using the Arduino platform. (C++, 2 stars)
30. bentrevett.github.io: My personal website to act as a portfolio. (HTML, 2 stars)
31. character-aware-neural-language-models: Implementation of 'Character-Aware Neural Language Models' in PyTorch using TorchText. (Python, 2 stars)
32. attributed-document-qa: (Python, 2 stars)
33. wordle-terminal: Wordle in the terminal. (Python, 1 star)
34. art: Markov chain to generate "art". (Python, 1 star)
35. sorting-algorithms: Implementation of sorting algorithms, with visualizations. (1 star)
36. py-algorithms: Implementation of various algorithms in Python 3. (Jupyter Notebook, 1 star)
37. keepnote: Google Chrome note taking extension. (JavaScript, 1 star)
38. Glucoduino-Classic-Bluetooth-Application: Android application for glucoduino project using standard Bluetooth. (Java, 1 star)
39. bentrevett: (1 star)
40. brainfuck-python: A brainfuck interpreter in Python 3. (Brainfuck, 1 star)
41. numberworld: A toy environment for task-oriented language grounding. (Python, 1 star)
42. Glucoduino-CSR-Chip: Code for the CSR uEnergy SDK for the glucoduino project. (C, 1 star)