An empirical study on the effectiveness of images in Multimodal Neural Machine Translation

Jean-Benoit Delbrouck, Stéphane Dupont


Abstract
In state-of-the-art Neural Machine Translation (NMT), an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multi-modal tasks, where it becomes possible to focus both on sentence parts and on the image regions that they describe. In this paper, we compare several attention mechanisms on the multi-modal translation task (English, image → German) and evaluate the ability of the model to make use of images to improve translation. We surpass state-of-the-art scores on the Multi30k data set; nevertheless, we identify and report several ways in which the model misbehaves while translating.
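For context, the soft attention referred to in the abstract is commonly written as follows; this is a generic sketch of Bahdanau-style attention over source annotations, not necessarily the exact parameterizations compared in the paper. At decoding step t, with previous decoder state s_{t-1} and source annotations h_i:

  e_{t,i}      = v_a^\top \tanh(W_a s_{t-1} + U_a h_i)          % alignment score
  \alpha_{t,i} = \exp(e_{t,i}) / \sum_j \exp(e_{t,j})           % attention weight
  c_t          = \sum_i \alpha_{t,i} h_i                        % context vector

In the multi-modal setting, an analogous attention over convolutional image-region features produces a visual context vector that is combined with c_t before the decoder predicts the next target word.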
Anthology ID:
D17-1095
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
910–919
URL:
https://aclanthology.org/D17-1095
DOI:
10.18653/v1/D17-1095
Cite (ACL):
Jean-Benoit Delbrouck and Stéphane Dupont. 2017. An empirical study on the effectiveness of images in Multimodal Neural Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 910–919, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
An empirical study on the effectiveness of images in Multimodal Neural Machine Translation (Delbrouck & Dupont, EMNLP 2017)
PDF:
https://aclanthology.org/D17-1095.pdf