  • Stars: 4,126
  • Rank: 9,992 (top 0.3%)
  • Language: Python
  • License: Apache License 2.0
  • Created: almost 7 years ago
  • Updated: 11 months ago

Repository Details

A TensorFlow Implementation of the Transformer: Attention Is All You Need

[UPDATED] A TensorFlow Implementation of Attention Is All You Need

When I opened this repository in 2017, there was no official code yet. I tried to implement the paper as I understood it, and unsurprisingly my code had several bugs. I found most of them thanks to the people who filed issues here, and I'm very grateful to all of them. Although there is now an official implementation, as well as several other unofficial GitHub repos, I decided to update my own. This update focuses on:

  • readable, understandable code
  • modularization (but not too much)
  • fixing known bugs (masking, positional encoding, ...)
  • updating to TF 1.12 (tf.data, ...)
  • adding some missing components (BPE, shared weight matrix, ...)
  • including useful comments in the code
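Two of the items above, the positional-encoding and masking fixes, are easy to illustrate. Below is a minimal NumPy sketch of the paper's sinusoidal positional encoding and a padding mask; this is an illustrative reimplementation, not the repo's actual code, and the function names are my own:

```python
import numpy as np

def positional_encoding(max_len, d_model):
    """Sinusoidal positional encoding from 'Attention Is All You Need':
    PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(...)."""
    pos = np.arange(max_len)[:, None]      # shape (max_len, 1)
    i = np.arange(d_model)[None, :]        # shape (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])   # even dimensions get sine
    pe[:, 1::2] = np.cos(angle[:, 1::2])   # odd dimensions get cosine
    return pe

def padding_mask(ids, pad_id=0):
    """1.0 where the token is real, 0.0 where it is padding, so that
    attention logits over pad positions can be set to -inf before softmax."""
    return (ids != pad_id).astype(np.float32)
```

A correct mask matters because without it the softmax assigns probability mass to padding tokens, which was one of the known bugs being fixed here.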

I still stick to IWSLT 2016 de-en. If you'd like to test on a large dataset such as WMT, you should probably rely on the official implementation. After all, it's pleasant to be able to check quickly whether your model works. The initial TF 1.2 code has been moved to the tf1.2_legacy folder for the record.

Requirements

  • python==3.x (If you still use Python 2, it's time to move on to Python 3.)
  • tensorflow==1.12.0
  • numpy>=1.15.4
  • sentencepiece==0.1.8
  • tqdm>=4.28.1

Training

  • STEP 1. Run the command below to download the IWSLT 2016 de-en parallel corpus.
bash download.sh

It should be extracted to the iwslt2016/de-en folder automatically.

  • STEP 2. Run the command below to create preprocessed train/eval/test data.
python prepro.py

If you want to change the vocabulary size (default: 32000), do this:

python prepro.py --vocab_size 8000

It should create two folders, iwslt2016/prepro and iwslt2016/segmented.

  • STEP 3. Run the following command.
python train.py

Check hparams.py to see which parameters are possible. For example,

python train.py --logdir myLog --batch_size 256 --dropout_rate 0.5
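Codebases in this style usually define such flags with argparse in hparams.py. A hedged sketch of what that looks like, using the flag names from the command above (the defaults here are illustrative, except vocab_size, which the README states is 32000):

```python
import argparse

def get_hparams(argv=None):
    """Parse training hyperparameters. Flag names follow the README's
    example command; default values are assumptions, not the repo's."""
    p = argparse.ArgumentParser()
    p.add_argument("--logdir", default="log/1", help="checkpoint directory")
    p.add_argument("--batch_size", type=int, default=128)
    p.add_argument("--dropout_rate", type=float, default=0.3)
    p.add_argument("--vocab_size", type=int, default=32000)
    return p.parse_args(argv)

# Usage: equivalent of `python train.py --batch_size 256 --dropout_rate 0.5`
hp = get_hparams(["--batch_size", "256", "--dropout_rate", "0.5"])
```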
  • STEP 3 (alternative). Download the pretrained models instead.
wget https://dl.dropbox.com/s/4lom1czy5xfzr4q/log.zip; unzip log.zip; rm log.zip

(Figures omitted: training loss curve, learning rate schedule, and BLEU score on the dev set.)

Inference (=test)

  • Run
python test.py --ckpt log/1/iwslt2016_E19L2.64-29146 (OR yourCkptFile OR yourCkptFileDirectory)

Results

  • Typically, machine translation is evaluated with the BLEU score.
  • All evaluation results are available in eval/1 and test/1.
tst2013 (dev): 28.06
tst2014 (test): 23.88
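For reference, BLEU is a geometric mean of modified n-gram precisions multiplied by a brevity penalty. A simplified sentence-level sketch is below; real evaluations (including these numbers) use standard tools such as multi-bleu.perl, so this is only to show the idea:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified BLEU: clipped n-gram precisions + brevity penalty.
    Single reference, sentence level; no smoothing beyond a floor."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())       # clipped matches
        total = max(sum(cand.values()), 1)
        log_prec += math.log(max(overlap, 1e-9) / total) / max_n
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_prec)
```

A perfect match scores 1.0; scores are conventionally reported multiplied by 100, as in the table above.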

Notes

  • Beam decoding will be added soon.
  • I'm going to update the code when TF 2.0 comes out, if possible.
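Beam decoding, mentioned above as future work, keeps the k best partial hypotheses at each step instead of committing greedily. A toy sketch over an arbitrary next-token scoring function (hypothetical, not the repo's decoder):

```python
def beam_search(step_log_probs, beam_size, max_len, eos_id):
    """step_log_probs(prefix) -> {token_id: log_prob} for the next token.
    Returns the highest-scoring hypothesis after max_len steps."""
    beams = [([], 0.0)]  # (token list, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == eos_id:          # finished hypothesis: keep as-is
                candidates.append((seq, score))
                continue
            for tok, lp in step_log_probs(seq).items():
                candidates.append((seq + [tok], score + lp))
        # prune to the beam_size best partial hypotheses
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_size]
    return beams[0][0]
```

In a real decoder, step_log_probs would be a forward pass of the model over the prefix, and scores are usually length-normalized before the final ranking.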

More Repositories

1. nlp_tasks: Natural Language Processing Tasks and References (3,018 stars)
2. wordvectors: Pre-trained word vectors of 30+ languages (Python, 2,199 stars)
3. tacotron: A TensorFlow Implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model (Python, 1,818 stars)
4. numpy_exercises: Numpy exercises (Python, 1,672 stars)
5. dc_tts: A TensorFlow Implementation of DC-TTS: yet another text-to-speech model (Python, 1,147 stars)
6. sudoku: Can Neural Networks Crack Sudoku? (Python, 821 stars)
7. g2p: English Grapheme To Phoneme Conversion (Python, 734 stars)
8. tensorflow-exercises: TensorFlow Exercises - focusing on the comparison with NumPy (Python, 535 stars)
9. deepvoice3: Tensorflow Implementation of Deep Voice 3 (Python, 452 stars)
10. css10: A Collection of Single Speaker Speech Datasets for 10 Languages (HTML, 440 stars)
11. neural_chinese_transliterator: Can CNNs transliterate Pinyin into Chinese characters correctly? (Python, 330 stars)
12. pytorch_exercises (Jupyter Notebook, 312 stars)
13. bert_ner: NER with BERT (Python, 278 stars)
14. word_prediction: Word Prediction using Convolutional Neural Networks (Python, 251 stars)
15. nlp_made_easy: Explains NLP building blocks in a simple manner (Jupyter Notebook, 247 stars)
16. g2pC: A Context-aware Grapheme-to-Phoneme Conversion module for Chinese (Python, 231 stars)
17. g2pK: g2p module for Korean (Python, 216 stars)
18. expressive_tacotron: Tensorflow Implementation of Expressive Tacotron (Python, 196 stars)
19. speaker_adapted_tts: Making a TTS model with 1 minute of speech samples within 10 minutes (184 stars)
20. neural_japanese_transliterator: Can neural networks transliterate Romaji into Japanese correctly? (Python, 173 stars)
21. tacotron_asr: Speech Recognition Using Tacotron (Python, 165 stars)
22. quasi-rnn: Character-level Neural Translation using Quasi-RNNs (Python, 134 stars)
23. label_smoothing: Corrupted labels and label smoothing (Jupyter Notebook, 127 stars)
24. bert-token-embeddings (Jupyter Notebook, 97 stars)
25. mtp: Multi-lingual Text Processing (95 stars)
26. cross_vc: Cross-lingual Voice Conversion (Python, 94 stars)
27. name2nat: a Python package for nationality prediction from a name (Python, 89 stars)
28. pron_dictionaries: pronunciation dictionaries for multiple languages (Python, 79 stars)
29. msg_reply: a simple message reply suggestion system (Python, 78 stars)
30. word_ordering: Can neural networks order a scramble of words correctly? (Python, 74 stars)
31. kss (Python, 70 stars)
32. neural_tokenizer: Tokenize English sentences using neural networks (Python, 64 stars)
33. bytenet_translation: A TensorFlow Implementation of Machine Translation In "Neural Machine Translation in Linear Time" (Python, 60 stars)
34. KoParadigm: Korean Inflectional Paradigm Generator (Python, 54 stars)
35. specAugment: Tensor2tensor experiment with SpecAugment (Python, 46 stars)
36. vq-vae: A Tensorflow Implementation of VQ-VAE Speaker Conversion (Python, 43 stars)
37. lm_finetuning: Language Model Fine-tuning for Moby Dick (Python, 42 stars)
38. texture_generation: An Implementation of 'Texture Synthesis Using Convolutional Neural Networks' with the Kylberg Texture Dataset (Python, 33 stars)
39. integer_sequence_learning: RNN Approaches to Integer Sequence Learning, the famous Kaggle competition (Python, 27 stars)
40. cjk_trans: Pre-trained Machine Translation Models of Korean from/to ECJ (27 stars)
41. h2h_converter: Convert Sino-Korean words written in Hangul to Chinese characters (hanja) using neural networks (Python, 25 stars)
42. up_and_running_with_Tensorflow: A simple tutorial of TensorFlow + TensorFlow / NumPy exercises (Jupyter Notebook, 13 stars)
43. neurobind: Yet Another Model Using Neural Networks for Predicting Binding Preferences of Test DNA Sequences (Python, 11 stars)
44. kollocate: Collocation Search of Korean (Python, 9 stars)
45. kyubyong (9 stars)
46. WhereAmI: Where Am I? - If you want to meet me (5 stars)
47. spam_detection: Spam Detection Under Semi-supervised Settings (5 stars)
48. helo_word: A Neural Grammatical Error Correction System Built On Better Pre-training and Sequential Transfer Learning (Python, 2 stars)