Interpretable Neural Dialog Generation via Discrete Sentence Representation Learning

PyTorch implementation for interpretable dialog generation (ACL 2018), released by Tiancheng Zhao (Tony) from the Dialog Research Center, LTI, CMU.

Codebase for Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation, published as a long paper at ACL 2018. You can find my presentation slides here.

If you use any source code or datasets included in this toolkit in your work, please cite the following paper. The BibTeX entry is listed below:

@article{zhao2018unsupervised,
  title={Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation},
  author={Zhao, Tiancheng and Lee, Kyusong and Eskenazi, Maxine},
  journal={arXiv preprint arXiv:1804.08069},
  year={2018}
}

Requirements

python 2.7
pytorch >= 0.3.0.post4
numpy
nltk
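
A minimal setup sketch, assuming pip is available; NLTK's punkt download is only needed if your data pipeline uses nltk.word_tokenize. Install PyTorch separately, following the instructions for the 0.3.x line at pytorch.org:

pip install numpy nltk
python -c "import nltk; nltk.download('punkt')"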

Datasets

The data folder contains the three datasets used below: Penn Treebank (PTB), Daily Dialog, and the Stanford Multi-Domain Dialog dataset.

Run Models

The first two scripts are sentence models (DI-VAE/DI-VST) that learn discrete sentence representations from either auto-encoding or context prediction.

Discrete Info Variational Autoencoder (DI-VAE)

The following command will train a DI-VAE on the PTB dataset. To run on a different dataset, follow the pattern in the PTB dataloader and corpus reader and implement your own data interface; a sketch follows the command below.

python ptb-utt.py
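
A hypothetical sketch of such a data interface, assuming a PTB-style reader that exposes tokenized utterances split into train/valid/test; the class and method names are illustrative, not the repo's actual API:

import nltk

class MyCorpus(object):
    def __init__(self, train_path, valid_path, test_path):
        self.train = self._read(train_path)
        self.valid = self._read(valid_path)
        self.test = self._read(test_path)

    def _read(self, path):
        # One utterance per line, wrapped in boundary tokens like the PTB reader.
        with open(path) as f:
            return [["<s>"] + nltk.word_tokenize(line.strip()) + ["</s>"]
                    for line in f if line.strip()]

    def get_corpus(self):
        return {"train": self.train, "valid": self.valid, "test": self.test}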

Discrete Info Variational Skip-thought (DI-VST)

The following command will train a DI-VST on the Daily Dialog corpus.

python dailydialog-utt-skip.py
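
To make context prediction concrete, here is a minimal, illustrative sketch of how skip-thought style training triples can be assembled from a dialog; it mirrors the idea behind DI-VST (encode an utterance, decode its neighbors), not the repo's actual batching code:

def make_skip_thought_triples(dialog):
    # dialog: a list of tokenized utterances from one conversation.
    triples = []
    for i in range(1, len(dialog) - 1):
        # DI-VST encodes dialog[i] into a discrete code and is trained to
        # decode the previous and next utterances from that code.
        triples.append((dialog[i - 1], dialog[i], dialog[i + 1]))
    return triples

print(make_skip_thought_triples([["hi"], ["how", "are", "you", "?"], ["good", "thanks"]]))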

The next two scripts train a latent-action encoder-decoder with either DI-VAE or DI-VST.

DI-VAE + Encoder Decoder (AE-ED)

The following command will first train a DI-VAE on the Stanford Multi-Domain Dialog dataset, and then train a hierarchical encoder-decoder (HRED) model with the latent code from the DI-VAE.

python stanford-ae.py

DI-VST + Encoder Decoder (ST-ED)

The following command will first train a DI-VST on the Stanford Multi-Domain Dialog dataset, and then train a hierarchical encoder-decoder (HRED) model with the latent code from the DI-VST. Both scripts follow the same two-stage recipe, sketched after the command below.

python stanford-skip.py
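
A high-level sketch of that two-stage recipe, using stand-in stub functions that are illustrative only, not the repo's actual API:

def train_sentence_model(corpus, kind, num_batches):
    # Stage 1: train the sentence model (DI-VAE for AE-ED, DI-VST for ST-ED).
    print("stage 1: training %s for %d batches" % (kind, num_batches))
    return {"kind": kind, "frozen": False}

def train_hred(corpus, sent_model):
    # Stage 2: train a hierarchical encoder-decoder conditioned on the
    # discrete latent codes, which stay frozen from here on.
    assert sent_model["frozen"], "freeze the latent actions first"
    print("stage 2: training HRED on codes from %s" % sent_model["kind"])

def run_pipeline(corpus=None, kind="DI-VST", freeze_step=2000):
    # freeze_step here is an illustrative value; see the parameter notes below.
    sent_model = train_sentence_model(corpus, kind, freeze_step)
    sent_model["frozen"] = True  # freeze the latent actions
    train_hred(corpus, sent_model)

run_pipeline()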

Change Configurations

Change model parameters

Generally, all the parameters are defined at the top of each script. You can either pass a different value on the command line or change the default value of each parameter. Some key parameters are explained below, followed by an example invocation:

  • y_size: the number of discrete latent variables
  • k: the number of classes for each discrete latent variable
  • use_reg_kl: whether or not to use KL regularization on the latent space. If False, the model becomes a plain autoencoder or skip-thought model.
  • use_mutual: whether to use the Batch Prior Regularization (BPR) proposed in our work or the standard ELBO setup.

Extra essential parameters for AE-ED or ST-ED:

  • use_attribute: whether or not to use the attribute-forcing loss in Eq. 10.
  • freeze_step: the number of batches used to train the DI-VAE/DI-VST before the latent actions are frozen and the encoder-decoder training begins.
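
For example, a hypothetical invocation that overrides some of these defaults (the exact flag syntax depends on each script's argument parsing, so check the top of the script):

python ptb-utt.py --y_size 4 --k 10 --use_mutual True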

Test an existing model

All trained models and log files are saved to the log folder. To run an existing model:

  • Set the forward_only argument to be True
  • Set the load_sess argument to the path to the model folder in log
  • Run the script
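
For example, a hypothetical run in which the session folder name is an illustrative placeholder:

python stanford-ae.py --forward_only True --load_sess 2018-05-01T00-00-00-stanford-ae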
