# TensorFlow (Keras) Attention Layer for RNN based models
## Version(s)
- TensorFlow: 2.9.1 (tested)
- TensorFlow: 1.15.0 (soon to be deprecated)
## Introduction
This is an implementation of Attention (only Bahdanau Attention is supported right now).
## Project structure

```
data (Download data and place it here)
|--- small_vocab_en.txt
|--- small_vocab_fr.txt
src
|--- layers
|    |--- attention.py (Attention implementation)
|--- examples
|    |--- nmt
|    |    |--- model.py (NMT model defined with Attention)
|    |    |--- train.py (Code for training/inferring/plotting attention with the NMT model)
|    |--- nmt_bidirectional
|    |    |--- model.py (Bidirectional NMT model defined with Attention)
|    |    |--- train.py (Code for training/inferring/plotting attention with the bidirectional NMT model)
```
## How to use

Just like you would use any other `tensorflow.python.keras.layers` object.

```python
from attention_keras.src.layers.attention import AttentionLayer

attn_layer = AttentionLayer(name='attention_layer')
attn_out, attn_states = attn_layer([encoder_outputs, decoder_outputs])
```
Here,

- `encoder_outputs` - Sequence of encoder outputs returned by the RNN/LSTM/GRU (i.e. with `return_sequences=True`)
- `decoder_outputs` - The above, for the decoder
- `attn_out` - Output context vector sequence for the decoder. This is to be concatenated with the decoder output (refer to `model/nmt.py` for more details)
- `attn_states` - Energy values, if you would like to generate the heat map of attention (refer to `model.train_nmt.py` for usage)
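To make the input/output shapes concrete, here is a minimal NumPy sketch of the additive (Bahdanau) scoring that this kind of layer computes. All dimensions, weights, and names below are illustrative, not the repository's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes: 5 encoder steps, 3 decoder steps (illustrative only).
enc_len, dec_len, enc_dim, dec_dim, attn_dim = 5, 3, 8, 8, 16

encoder_outputs = rng.standard_normal((enc_len, enc_dim))  # h_j, one per source step
decoder_outputs = rng.standard_normal((dec_len, dec_dim))  # s_t, one per target step

# Parameters of the additive score function (randomly initialized here).
W_a = rng.standard_normal((enc_dim, attn_dim))
U_a = rng.standard_normal((dec_dim, attn_dim))
v_a = rng.standard_normal((attn_dim,))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# energies[t, j] = v_a . tanh(W_a h_j + U_a s_t): one score per
# (decoder step, encoder step) pair, computed via broadcasting.
energies = np.tanh(
    (decoder_outputs @ U_a)[:, None, :] + (encoder_outputs @ W_a)[None, :, :]
) @ v_a                                        # shape (dec_len, enc_len)

attn_states = softmax(energies, axis=-1)       # attention weights, rows sum to 1
attn_out = attn_states @ encoder_outputs       # context vectors, (dec_len, enc_dim)
```

The two results mirror the layer's outputs: `attn_out` is the context vector sequence to concatenate with the decoder output, and `attn_states` holds the normalized energies used for the heat map.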
## Visualizing Attention weights

An example of plotting attention weights can be seen in `model.train_nmt.py`. After the model is trained, the attention heat map should look like the one below.
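For a quick sanity check without a plotting library, the normalized attention weights can also be rendered as a crude text heat map. The helper below is only an illustrative sketch using made-up weights, not code from the repository:

```python
import numpy as np

def ascii_heatmap(weights, shades=" .:-=+*#%@"):
    """Map each attention weight in [0, 1] to one of ten shade characters."""
    idx = np.minimum((np.asarray(weights) * len(shades)).astype(int), len(shades) - 1)
    return ["".join(shades[i] for i in row) for row in idx]

# Hypothetical attention matrix: rows = decoder steps, columns = encoder steps.
# In this repo it would come from the `attn_states` output of AttentionLayer,
# where each row is already a softmax over encoder positions.
rng = np.random.default_rng(0)
weights = rng.random((4, 6))
weights = weights / weights.sum(axis=1, keepdims=True)

for row in ascii_heatmap(weights):
    print(row)
```

Darker characters mark the encoder positions each decoder step attends to most.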
## Running the NMT example

### Prerequisites

- In order to run the example you need to download `small_vocab_en.txt` and `small_vocab_fr.txt` from the Udacity deep learning repository and place them in the `data` folder.
### Using the Docker image

- If you would like to run this in a Docker environment, simply running `run.sh` will take you inside the Docker container.
- E.g. usage: `run.sh -v <TF_VERSION> [-g]`
  - `-v` specifies the TensorFlow version (defaults to `latest`)
  - `-g`, if specified, uses the GPU-compatible Docker image
### Using a virtual environment

- If you would like to use a virtual environment, first create and activate it.
- Then install the dependencies with either:
  - `pip install -r requirements.txt -r requirements_tf_cpu.txt` (for CPU), or
  - `pip install -r requirements.txt -r requirements_tf_gpu.txt` (for GPU)
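As a concrete sketch of the first step (the environment name `attention-env` is arbitrary), using Python's built-in `venv` module:

```shell
# Create an isolated environment (assumes python3 with the venv module is available).
python3 -m venv attention-env

# Activate it (on Windows: attention-env\Scripts\activate).
. attention-env/bin/activate

# Then install the project requirements, e.g. the CPU variant:
# pip install -r requirements.txt -r requirements_tf_cpu.txt
```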
### Running the code

- Run every example from the main (repository root) folder. Otherwise, you will run into problems with finding/writing data.
- Run `python3 src/examples/nmt/train.py`. Set `debug=True` if you need a simpler and faster run.
- If it runs successfully, you should have models saved in the model directory and `attention.png` in the `results` directory.
## If you would like to show support

If you'd like to show your appreciation, you can buy me a coffee. No stress! It's totally optional. The support I received would definitely be an added benefit to maintaining the repository and continuing my other contributions.

If you have improvements (e.g. other attention mechanisms), contributions are welcome!