This repository implements a sequence-to-sequence learning model with an attention mechanism, aimed at practical tasks such as simple dialogue, machine translation, and pronunciation-to-word conversion. The repository is implemented in TensorFlow.
Before starting an experiment, you need to pull the data first (using the Cornell dataset as an example):
$ cd dataset
$ bash down_cornell.sh
It will create a directory `raw/cornell`, and the downloaded raw data will be stored under this directory.
Note: the other `.sh` data pullers will download and unzip their data into the `raw/` folder as sub-directories with specific names.
Then go back to the repository root, and execute the following commands to start a training or inference task:
$ cd ..
$ python3 cornell_dialogue.py --mode train # or decode if you have pretrained checkpoints
It will clean up the dataset, create the vocabularies plus the train/test dataset indices, and save the processed data to the `dataset/data/cornell` directory (if the processed data already exists, this step is skipped; a minimal sketch of the caching idea follows).
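The skip-if-processed behaviour is just a cache check on the output location. The sketch below illustrates the pattern with placeholder artifacts and an assumed cache path; it is not the repo's actual preprocessing code.

```python
import os
import pickle

# Assumed cache location; the repo stores its processed data under dataset/data/cornell.
PROCESSED_PATH = "dataset/data/cornell/processed.pkl"

def load_or_build():
    """Reuse the processed dataset if it already exists, otherwise build and cache it."""
    if os.path.exists(PROCESSED_PATH):
        with open(PROCESSED_PATH, "rb") as f:
            return pickle.load(f)
    # Placeholder artifacts standing in for the real cleanup / vocabulary /
    # train-test index-building pipeline.
    data = {"vocab": ["<pad>", "<unk>", "<s>", "</s>"], "train_ids": [], "test_ids": []}
    os.makedirs(os.path.dirname(PROCESSED_PATH), exist_ok=True)
    with open(PROCESSED_PATH, "wb") as f:
        pickle.dump(data, f)
    return data
```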
After preprocessing, the script loads the preset configurations (you can change them in the Python file), builds the model, and starts a training session. A training run prints output like the following:
No preprocessed dataset found, create from cornell raw data...
Read cornell movie lines: 304713it [00:02, 128939.96it/s]
Read cornell movie conversations: 83097it [00:01, 46060.47it/s]
Create cornell utterance pairs: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 83097/83097 [01:02<00:00, 1319.20it/s]
Build vocabulary: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 158669/158669 [00:02<00:00, 77018.89it/s]
Build dataset: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 158669/158669 [00:01<00:00, 89873.72it/s]
Load configurations...
Load dataset and create batches...
Prepare train batches: 4711it [00:02, 2225.27it/s]
Prepare test batches: 248it [00:00, 3951.25it/s]
Building model...
source embedding shape: [None, None, 1024]
target input embedding shape: [None, None, 1024]
bi-directional rnn output shape: [None, None, 2048]
encoder input projection shape: [None, None, 1024]
encoder output shape: [None, None, 1024]
decoder rnn output shape: [None, None, 10004] (last dimension is vocab size)
number of trainable parameters: 78197524.
Start training...
Epoch 1 / 60:
1/4711 [..............................] - ETA: 1468s - Global Step: 1 - Train Loss: 9.2197 - Perplexity: 10094.0631
...
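The tensor shapes printed while the model is built come from the embedding size (1024), the concatenated bi-directional encoder outputs (2 × 1024 = 2048) projected back down to the model width, and a final projection onto the vocabulary (10004). The TF 1.x sketch below reproduces those shapes; the cell types, layer names, and the omitted attention wiring are assumptions, not the repo's exact code.

```python
import tensorflow as tf

vocab_size, embed_dim, num_units = 10004, 1024, 1024  # taken from the log above

source_ids = tf.placeholder(tf.int32, [None, None])         # [batch, time]
target_ids = tf.placeholder(tf.int32, [None, None])

embedding = tf.get_variable("embedding", [vocab_size, embed_dim])
source_emb = tf.nn.embedding_lookup(embedding, source_ids)   # [None, None, 1024]
target_emb = tf.nn.embedding_lookup(embedding, target_ids)   # [None, None, 1024]

# Bi-directional encoder: forward and backward outputs are concatenated (2048),
# then projected back to the model width (1024) before the encoder stack.
fw_cell = tf.nn.rnn_cell.LSTMCell(num_units)
bw_cell = tf.nn.rnn_cell.LSTMCell(num_units)
bi_outputs, _ = tf.nn.bidirectional_dynamic_rnn(
    fw_cell, bw_cell, source_emb, dtype=tf.float32, scope="bi_rnn")
bi_concat = tf.concat(bi_outputs, axis=-1)                   # [None, None, 2048]
encoder_inputs = tf.layers.dense(bi_concat, num_units)       # [None, None, 1024]

encoder_cell = tf.nn.rnn_cell.LSTMCell(num_units)
encoder_outputs, _ = tf.nn.dynamic_rnn(
    encoder_cell, encoder_inputs, dtype=tf.float32, scope="encoder")  # [None, None, 1024]

# Decoder (attention omitted for brevity): the final layer projects the RNN
# outputs onto the vocabulary, hence the 10004 last dimension.
decoder_cell = tf.nn.rnn_cell.LSTMCell(num_units)
decoder_outputs, _ = tf.nn.dynamic_rnn(
    decoder_cell, target_emb, dtype=tf.float32, scope="decoder")
decoder_logits = tf.layers.dense(decoder_outputs, vocab_size)  # [None, None, 10004]
```

As a side note on the training log, the reported perplexity is simply the exponential of the cross-entropy loss: exp(9.2197) ≈ 10094, which matches the first step shown above.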
List of datasets that the model of this repository is able to handle:
- Cornell Movie-Dialogs Corpus.
- Twitter Chat, borrowed from [marsan-ma/chat_corpus], with 700k lines of tweets, where odd lines are tweets and even lines are the corresponding replies (a small pairing sketch follows this list).
- CMU Pronouncing Dictionary.
- IWSLT 2012 MT Track dataset, English-French translation.
- IWSLT Evaluation 2016 MT Track dataset, English-French translation.
- Europarl dataset, English-French translation, reference: [How to Prepare a French-to-English Dataset for Machine Translation].
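For the Twitter corpus, turning the odd/even line layout into (tweet, reply) pairs is straightforward; the snippet below is a minimal illustration with an assumed file path, not the repo's actual loader.

```python
# Pair each odd line (tweet) with the even line that follows it (reply).
# The path below is illustrative; point it at wherever the corpus is unpacked.
def read_twitter_pairs(path="dataset/raw/twitter/chat.txt"):
    with open(path, encoding="utf-8") as f:
        lines = [line.rstrip("\n") for line in f]
    return list(zip(lines[0::2], lines[1::2]))  # list of (tweet, reply) tuples
```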
Feature checklist of this repository:
- Build basic model.
- Add Bahdanau and Luong attention (see the decoder-cell sketch after this list).
- Add dropout wrapper.
- Add residual wrapper.
- Add learning rate decay.
- Add different training optimizers.
- Add bidirectional rnn for encoder.
- Add sub-word module, ref: [BPE].
- Add GNMTAttentionMultiCell wrapper, ref: [Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation]. source: [tensorflow/nmt/nmt/gnmt_model.py].
- Add BLEU measurement.
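To make a few of the items above concrete, the sketch below shows one common way (in TF 1.x, using `tf.contrib.seq2seq`) to combine Bahdanau or Luong attention with dropout and residual wrappers around the decoder cell. Names and hyper-parameters are illustrative assumptions, not this repository's exact implementation.

```python
import tensorflow as tf

def build_decoder_cell(encoder_outputs, source_lengths, num_units=1024,
                       keep_prob=0.8, use_luong=False):
    # Two-layer decoder: dropout on both layers, residual connection only on
    # the upper layer (its input and output widths match, so the skip add is valid).
    bottom = tf.nn.rnn_cell.DropoutWrapper(
        tf.nn.rnn_cell.LSTMCell(num_units), output_keep_prob=keep_prob)
    top = tf.nn.rnn_cell.ResidualWrapper(
        tf.nn.rnn_cell.DropoutWrapper(
            tf.nn.rnn_cell.LSTMCell(num_units), output_keep_prob=keep_prob))
    stacked = tf.nn.rnn_cell.MultiRNNCell([bottom, top])

    # Bahdanau (additive) or Luong (multiplicative) attention over the encoder outputs.
    attention_cls = (tf.contrib.seq2seq.LuongAttention if use_luong
                     else tf.contrib.seq2seq.BahdanauAttention)
    attention = attention_cls(num_units, memory=encoder_outputs,
                              memory_sequence_length=source_lengths)
    return tf.contrib.seq2seq.AttentionWrapper(
        stacked, attention, attention_layer_size=num_units)
```

Placing the residual wrapper only on the upper layer avoids a shape mismatch: the bottom layer's input is the concatenation of the decoder input and the attention context, so its input and output widths differ and cannot be summed.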