Towards Compact and Fast Neural Machine Translation Using a Combined Method

Xiaowei Zhang, Wei Chen, Feng Wang, Shuang Xu, Bo Xu


Abstract
Neural Machine Translation (NMT) places a heavy burden on computation and memory, making it challenging to deploy NMT models on devices with limited computation and memory budgets. This paper presents a four-stage pipeline to compress the model and speed up decoding for NMT. Our method first introduces a compact architecture based on a convolutional encoder and weight-shared embeddings. Then weight pruning is applied to obtain a sparse model. Next, we propose a fast sequence interpolation approach which enables greedy decoding to achieve performance on par with beam search. Hence, the time-consuming beam search can be replaced by simple greedy decoding. Finally, vocabulary selection is used to reduce the computation of the softmax layer. Our final model achieves a 10x speedup, a 17x reduction in parameters, a storage size of less than 35 MB, and performance comparable to the baseline model.
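To make the weight-pruning stage of the pipeline concrete, below is a minimal sketch of magnitude-based pruning, a standard technique for obtaining a sparse model. The 90% sparsity target, the matrix shape, and the quantile-based threshold are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch of magnitude-based weight pruning (illustrative only).
# The sparsity level and matrix size are assumed, not taken from the paper.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that `sparsity`
    fraction of the weights become zero, yielding a sparse matrix."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask

# Example: prune a randomly initialized weight matrix.
W = np.random.randn(512, 512).astype(np.float32)
W_sparse = prune_by_magnitude(W, sparsity=0.9)
print(f"Nonzero fraction after pruning: {np.count_nonzero(W_sparse) / W_sparse.size:.2f}")
```

In practice, such pruning is typically followed by fine-tuning so the remaining weights can recover any accuracy lost when the small-magnitude entries are zeroed out.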
Anthology ID:
D17-1154
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1475–1481
URL:
https://aclanthology.org/D17-1154
DOI:
10.18653/v1/D17-1154
Cite (ACL):
Xiaowei Zhang, Wei Chen, Feng Wang, Shuang Xu, and Bo Xu. 2017. Towards Compact and Fast Neural Machine Translation Using a Combined Method. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1475–1481, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Towards Compact and Fast Neural Machine Translation Using a Combined Method (Zhang et al., EMNLP 2017)
PDF:
https://aclanthology.org/D17-1154.pdf