Are BLEU and Meaning Representation in Opposition?

Ondřej Cífka, Ondřej Bojar


Abstract
One possible way of obtaining continuous-space sentence representations is to train neural machine translation (NMT) systems. However, the recent attention mechanism removes the single point in the neural network from which the source sentence representation can be extracted. We propose several variations of the attentive NMT architecture that bring this meeting point back. Empirical evaluation suggests that the better the translation quality, the worse the learned sentence representations serve in a wide range of classification and similarity tasks.
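To make the "single point" idea concrete, the sketch below shows one simple way to collapse per-token encoder states into a fixed-size sentence vector by mean-pooling. This is an illustrative assumption only: the function name and pooling choice are mine, and the paper's proposed architecture variants are more involved.

```python
def sentence_embedding(encoder_states):
    """Mean-pool per-token encoder states (a list of equal-length
    vectors, one per source token) into one fixed-size sentence vector.
    This restores a single extraction point for the sentence
    representation, which token-level attention otherwise removes.
    Illustrative sketch only; not the paper's actual architecture."""
    n = len(encoder_states)
    dim = len(encoder_states[0])
    return [sum(state[j] for state in encoder_states) / n
            for j in range(dim)]

# Toy example: 3 source tokens with 4-dimensional hidden states.
states = [[1.0, 2.0, 3.0, 4.0],
          [3.0, 2.0, 1.0, 0.0],
          [2.0, 2.0, 2.0, 2.0]]
print(sentence_embedding(states))  # [2.0, 2.0, 2.0, 2.0]
```

The resulting vector has a fixed dimensionality regardless of sentence length, which is what makes it usable as input to downstream classification and similarity tasks.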
Anthology ID:
P18-1126
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1362–1371
URL:
https://aclanthology.org/P18-1126
DOI:
10.18653/v1/P18-1126
Cite (ACL):
Ondřej Cífka and Ondřej Bojar. 2018. Are BLEU and Meaning Representation in Opposition?. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1362–1371, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Are BLEU and Meaning Representation in Opposition? (Cífka & Bojar, ACL 2018)
PDF:
https://aclanthology.org/P18-1126.pdf
Note:
 P18-1126.Notes.pdf
Presentation:
 P18-1126.Presentation.pdf
Video:
 https://vimeo.com/285803604
Data
MS COCO