Lexical Chains meet Word Embeddings in Document-level Statistical Machine Translation

Laura Mascarell


Abstract
The phrase-based Statistical Machine Translation (SMT) approach deals with sentences in isolation, making it difficult to consider discourse context in translation. This poses a challenge for ambiguous words that need discourse knowledge to be correctly translated. We propose a method that exploits the semantic similarity captured in lexical chains to improve SMT output, integrating the chains into a document-level decoder. In contrast to the traditional approach, which builds lexical chains from lexical resources, we rely on word embeddings. Experimental results on German-to-English show that our method produces correct translations in up to 88% of the changes, improving the translation over the baseline in 36%-48% of them.
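The paper builds lexical chains from word embeddings rather than lexical resources. As a rough illustration of that idea (not the paper's actual algorithm or thresholds), the sketch below greedily groups content words of a document into chains whenever their embedding cosine similarity exceeds a threshold; the embedding lookup and the 0.6 threshold are hypothetical.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def build_lexical_chains(words, embeddings, threshold=0.6):
    """Greedily group content words into lexical chains.

    A word joins an existing chain if its embedding is similar enough
    (cosine >= threshold) to some word already in that chain; otherwise
    it starts a new chain. This is only a sketch of the general idea.
    """
    chains = []  # each chain is a list of words
    for word in words:
        if word not in embeddings:
            continue
        vec = embeddings[word]
        for chain in chains:
            if any(cosine(vec, embeddings[w]) >= threshold for w in chain):
                chain.append(word)
                break
        else:
            chains.append([word])
    return chains

# Toy example with made-up 3-dimensional embeddings.
embeddings = {
    "bank":    np.array([0.90, 0.10, 0.00]),
    "finance": np.array([0.80, 0.20, 0.10]),
    "money":   np.array([0.85, 0.15, 0.05]),
    "river":   np.array([0.00, 0.90, 0.30]),
}
print(build_lexical_chains(["bank", "finance", "river", "money"], embeddings))
# -> [['bank', 'finance', 'money'], ['river']]
```

In a document-level setting, such chains would then provide cross-sentence evidence to bias the decoder toward consistent translations of chain members; how that integration is done in the paper is described in the full text.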
Anthology ID:
W17-4813
Volume:
Proceedings of the Third Workshop on Discourse in Machine Translation
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Bonnie Webber, Andrei Popescu-Belis, Jörg Tiedemann
Venue:
DiscoMT
Publisher:
Association for Computational Linguistics
Pages:
99–109
URL:
https://aclanthology.org/W17-4813
DOI:
10.18653/v1/W17-4813
Cite (ACL):
Laura Mascarell. 2017. Lexical Chains meet Word Embeddings in Document-level Statistical Machine Translation. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 99–109, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Lexical Chains meet Word Embeddings in Document-level Statistical Machine Translation (Mascarell, DiscoMT 2017)
PDF:
https://aclanthology.org/W17-4813.pdf
Data
Europarl