Status: Reference code only. No updates expected.
# Attention RNNs in Keras

Implementation and visualization of a custom RNN layer with attention in Keras for translating dates.

This repository comes with a tutorial found here: https://medium.com/datalogue/attention-in-keras-1892773a4f22
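For orientation, the layer's central idea is Bahdanau-style additive attention (Bahdanau et al., 2014): the decoder scores every encoder timestep with a small network and takes a softmax-weighted average of the encoder states. The snippet below is a minimal sketch of that computation in modern `tf.keras`, written for illustration only; it is not the custom layer implemented in this repository (see the tutorial for that):

```python
import tensorflow as tf

class AdditiveAttention(tf.keras.layers.Layer):
    """Minimal sketch of Bahdanau-style additive attention (illustrative)."""

    def __init__(self, units):
        super().__init__()
        self.W_enc = tf.keras.layers.Dense(units)  # projects encoder states
        self.W_dec = tf.keras.layers.Dense(units)  # projects the decoder state
        self.v = tf.keras.layers.Dense(1)          # scores each encoder timestep

    def call(self, decoder_state, encoder_outputs):
        # decoder_state: (batch, dec_units); encoder_outputs: (batch, T, enc_units)
        dec = tf.expand_dims(decoder_state, 1)  # (batch, 1, dec_units)
        scores = self.v(tf.tanh(self.W_enc(encoder_outputs) + self.W_dec(dec)))
        weights = tf.nn.softmax(scores, axis=1)  # (batch, T, 1): the attention map
        context = tf.reduce_sum(weights * encoder_outputs, axis=1)  # weighted average
        return context, weights
```

The returned `weights` tensor, one value per encoder timestep, is the kind of attention map that the visualization step below renders.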
## Setting up the repository

- Make sure you have Python 3.4+ installed.
- Clone this repository to your local system:

  ```
  git clone https://github.com/datalogue/keras-attention.git
  ```

- Install the requirements (you can skip this step if you already have them installed). We recommend using a GPU, otherwise training might be prohibitively slow:

  ```
  pip install -r requirements-gpu.txt
  ```

  If you do not have a GPU or want to prototype on your local machine:

  ```
  pip install -r requirements.txt
  ```
## Creating the dataset

`cd` into `data/` and run:

```
python generate.py
```
This will create 4 files:

- `training.csv` - data to train the model
- `validation.csv` - data to evaluate the model and compare performance
- `human_vocab.json` - vocabulary for the human dates
- `machine_vocab.json` - vocabulary for the machine dates
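To sanity-check the output, you can peek at the generated files. The sketch below assumes two-column CSV rows (a human-readable date and its machine-formatted target) and character-to-index JSON vocabularies; that layout is an assumption on our part, so adjust if `generate.py` writes something different:

```python
# Sketch: inspect the generated dataset (run from the repository root).
# Assumes two-column CSV rows (human date, machine date) and
# char-to-index JSON vocabularies; adjust if generate.py differs.
import csv
import json

with open('data/training.csv') as f:
    human, machine = next(csv.reader(f))
print(human, '->', machine)

with open('data/human_vocab.json') as f:
    human_vocab = json.load(f)
print(len(human_vocab), 'entries in the human vocabulary')
```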
## Running the model

We highly recommend having a machine with a GPU to run this software, otherwise training might be prohibitively slow. To see what arguments are accepted, you can run `python run.py -h` from the main directory:

```
usage: run.py [-h] [-e |] [-g |] [-p |] [-t |] [-v |] [-b |]

optional arguments:
  -h, --help            show this help message and exit

named arguments:
  -e |, --epochs |      Number of epochs to run
  -g |, --gpu |         GPU to use
  -p |, --padding |     Amount of padding to use
  -t |, --training-data |
                        Location of training data
  -v |, --validation-data |
                        Location of validation data
  -b |, --batch-size |  Batch size to use while training
```

All parameters have default values, so if you want to just run it, you can type `python run.py`. You can always stop running the model early using `Ctrl+C`.
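For example, a training run that overrides a few of the defaults might look like this (the specific values are illustrative, not the defaults):

```
python run.py -e 50 -g 0 -t data/training.csv -v data/validation.csv -b 32
```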
## Visualizing Attention

You can use the script `visualize.py` to visualize the attention map. We have provided sample weights and vocabularies in `data/` and `weights/` so that this script can run out of the box with just an example. Run with the `-h` argument to see what is accepted:

```
usage: visualize.py [-h] -e | [-w |] [-p |] [-hv |] [-mv |]

optional arguments:
  -h, --help            show this help message and exit

named arguments:
  -e |, --examples |    Example string/file to visualize attention map for.
                        If a file, it must end with '.txt'
  -w |, --weights |     Location of weights
  -p |, --padding |     Length of padding
  -hv |, --human-vocab |
                        Path to the human vocabulary
  -mv |, --machine-vocab |
                        Path to the machine vocabulary
```

The default `padding` parameters of `run.py` and `visualize.py` correspond, so if you change the padding in one script, make sure to use the same value in the other. You must supply the path to the weights you want to use and an example string (or a file of examples). An example file is provided in `examples.txt`.
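Putting it together, an invocation might look like the following. The weights filename here is a placeholder, so substitute the sample weights shipped in `weights/` or a file produced by your own training run:

```
# the .h5 filename below is a placeholder; use a real file from weights/
python visualize.py -e examples.txt -w weights/your-weights.h5 -hv data/human_vocab.json -mv data/machine_vocab.json
```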
## Example visualizations

Here are some example visuals you can obtain:

The model has learned that "Saturday" has no predictive value!

We can see that the weirdly formatted date "January 2016 5" is incorrectly translated as 2016-01-02, where the "02" comes from the "20" in 2016.
## Help

Open an issue and we will do our best to help!
## Acknowledgements

As with all open source code, we could not have built this without other code out there. Special thanks to:

- rasmusbergpalm/normalization - for some of the data generation code.
- joke2k/faker - for their fake data generator.
## References
Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. "Neural machine translation by jointly learning to align and translate." arXiv preprint arXiv:1409.0473 (2014).