Learning to Remember Translation History with a Continuous Cache

Zhaopeng Tu, Yang Liu, Shuming Shi, Tong Zhang


Abstract
Existing neural machine translation (NMT) models generally translate sentences in isolation, missing the opportunity to take advantage of document-level information. In this work, we propose to augment NMT models with a very light-weight cache-like memory network, which stores recent hidden representations as translation history. The probability distribution over generated words is updated online depending on the translation history retrieved from the memory, endowing NMT models with the capability to dynamically adapt over time. Experiments on multiple domains with different topics and styles show the effectiveness of the proposed approach with negligible impact on the computational cost.
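The abstract describes storing recent hidden representations as a cache and retrieving from it to adjust the output word distribution online. Below is a minimal, hypothetical NumPy sketch of that idea, not the paper's exact parameterization: the class and function names, the FIFO eviction, the dot-product matching, and the gating formula are all illustrative assumptions.

```python
import numpy as np

class ContinuousCache:
    """Sketch of a continuous cache for NMT (hypothetical names and shapes).

    Keys/values hold recent hidden representations from previously translated
    sentences; retrieval is a softmax over key similarities.
    """

    def __init__(self, capacity=2000, dim=512):
        self.capacity = capacity
        self.keys = np.empty((0, dim))    # e.g., attention context vectors
        self.values = np.empty((0, dim))  # e.g., decoder hidden states

    def write(self, key, value):
        # Append the newest entry; evict the oldest when full (FIFO).
        self.keys = np.vstack([self.keys, key])[-self.capacity:]
        self.values = np.vstack([self.values, value])[-self.capacity:]

    def read(self, query):
        # Match the query against cached keys and return a weighted
        # combination of the cached values.
        if len(self.keys) == 0:
            return np.zeros_like(query)
        scores = self.keys @ query                  # dot-product matching
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                    # softmax over cache entries
        return weights @ self.values


def combine(decoder_state, cache_output, gate_weight):
    """Gate between the current decoder state and the retrieved cache output
    before the softmax over the vocabulary (illustrative gating only)."""
    g = 1.0 / (1.0 + np.exp(-(gate_weight @ np.concatenate(
        [decoder_state, cache_output]))))
    return g * decoder_state + (1.0 - g) * cache_output
```

A learned gate of this kind lets the model fall back to the plain decoder state when the cache holds nothing relevant, which is one way the "negligible impact" on translation could be preserved; the actual combination used in the paper may differ.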
Anthology ID: Q18-1029
Volume: Transactions of the Association for Computational Linguistics, Volume 6
Year: 2018
Address: Cambridge, MA
Editors: Lillian Lee, Mark Johnson, Kristina Toutanova, Brian Roark
Venue: TACL
Publisher: MIT Press
Pages: 407–420
URL: https://aclanthology.org/Q18-1029
DOI: 10.1162/tacl_a_00029
Cite (ACL): Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2018. Learning to Remember Translation History with a Continuous Cache. Transactions of the Association for Computational Linguistics, 6:407–420.
Cite (Informal): Learning to Remember Translation History with a Continuous Cache (Tu et al., TACL 2018)
PDF: https://aclanthology.org/Q18-1029.pdf
Code: longyuewangdcu/tvsub