This repo implements the paper *Improving Zero-shot Translation of Low-resource Languages*.
Before getting to the experimental setup, consider the following scenario to see what type of MT problem we are approaching:
- For languages `X`, `Y`, and `P`, parallel training data is available only for the `X-P` and `Y-P` pairs. This allows training a multilingual model with four translation directions.
- At inference time, however, you can attempt to translate between the `X-Y` pair, also known as Zero-Shot Translation (ZST).
Given that the large majority of language pairs lack parallel data, ZST is an exciting approach, especially if the translations are usable. In practice, however, naive ZST often yields poor outputs, for instance mixed-language text on top of wrong translations.
What you will replicate below answers the question: how can we improve over naive zero-shot inference by leveraging a baseline multilingual model? For further details on the approach, see the paper.
To set up the environment, run:

```bash
./setup-env.sh
```

or see the dependencies for each repo.
For this experiment, we use the TED Talks data from Qi et al.:

```bash
./scripts/get-ted-talks-data.sh
```
Following the scenario with (`X`, `Y`, `P`) languages, let's take Italian/X (`it`), Romanian/Y (`ro`), and English/P (`en`):

```bash
./scripts/preprocess.sh 'it ro'
```
We assume `en` as the target for the `it` and `ro` sources. In total, we process multilingual training data for all four translation directions.
To train the baseline multilingual model, run:

```bash
./pretrain-baseline.sh
```
Before the ZST training, let's extract n-way parallel evaluation data (e.g. `X-P-Y`) from the `X-P` and `Y-P` pairs. This is important for evaluating the `X<>Y` ZST pair and the alternative pivot translation `X<>P<>Y`:
```bash
./scripts/get-n-way-parallel-data.sh [zst-src-lang-id] [zst-tgt-lang-id] [pivot-lang-id]
```
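For example, using the language IDs from this walkthrough (a hypothetical invocation; the argument order follows the placeholders above):

```bash
# Extract it-en-ro n-way parallel data for evaluating the it<>ro ZST pair,
# with en as the pivot (language IDs taken from the walkthrough above)
./scripts/get-n-way-parallel-data.sh it ro en
```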
To train the ZST model starting from the pre-trained baseline, run:

```bash
./train-zst-model.sh [zst-src-lang-id] [zst-tgt-lang-id] [pre-trained-model-dir] [zst-training-rounds] [gpu-id]
```
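For instance, a run for the `it<>ro` pair might look like the following; the model directory, number of rounds, and GPU ID are illustrative placeholders, not defaults:

```bash
# Hypothetical example: ZST training for it<>ro, starting from the baseline
# checkpoint directory, for 3 rounds, on GPU 0 (all values are assumptions)
./train-zst-model.sh it ro ./model/baseline 3 0
```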
The evaluation script takes a preprocessed source file, translates it, and evaluates the output. For src-pivot-tgt pivot-based evaluation, specify the pivot language ID:
```bash
./translate_evaluate.sh [data-bin-dir] [src-input] [model] [gpu-id] [src-lang-id] [tgt-lang-id] [pivot-lang-id]
```
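A sketch of both evaluation modes, assuming illustrative paths for the data-bin directory, source input, and model checkpoint:

```bash
# Direct it->ro zero-shot evaluation (paths are assumptions for illustration)
./translate_evaluate.sh ./data-bin ./data/test.it ./model/zst-checkpoint 0 it ro

# Pivot-based it->en->ro evaluation: append the pivot language ID
./translate_evaluate.sh ./data-bin ./data/test.it ./model/zst-checkpoint 0 it ro en
```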
If you use this work, please cite:

```
@article{lakew2018improving,
  title={Improving zero-shot translation of low-resource languages},
  author={Lakew, Surafel M and Lotito, Quintino F and Negri, Matteo and Turchi, Marco and Federico, Marcello},
  journal={arXiv preprint arXiv:1811.01389},
  year={2018}
}

@article{lakew2019multilingual,
  title={Multilingual Neural Machine Translation for Zero-Resource Languages},
  author={Lakew, Surafel M and Federico, Marcello and Negri, Matteo and Turchi, Marco},
  journal={arXiv preprint arXiv:1909.07342},
  year={2019}
}
```