Multi-Reference Training with Pseudo-References for Neural Translation and Text Generation

Renjie Zheng, Mingbo Ma, Liang Huang


Abstract
Neural text generation, including neural machine translation, image captioning, and summarization, has been quite successful recently. However, during training, typically only one reference is considered for each example, even though there are often multiple references available, e.g., 4 references in NIST MT evaluations and 5 references in image captioning data. We first investigate several ways of utilizing multiple human references during training; more importantly, we then propose an algorithm to generate exponentially many pseudo-references by first compressing existing human references into lattices and then traversing them to generate new pseudo-references. These approaches lead to substantial improvements over strong baselines in both machine translation (+1.5 BLEU) and image captioning (+3.1 BLEU / +11.7 CIDEr).
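The core idea of the abstract, compressing several human references into a word lattice and then traversing it to produce new pseudo-references, can be illustrated with a minimal sketch. This is not the paper's implementation: the pairwise alignment via difflib and the helper names merge_pair and traverse are illustrative assumptions only.

```python
# Minimal sketch (not the authors' code): merge two tokenized references into a
# word lattice by aligning them, then enumerate lattice paths as pseudo-references.
from difflib import SequenceMatcher
from itertools import product

def merge_pair(ref_a, ref_b):
    """Align two token lists and return a lattice as a list of 'slots';
    each slot is a set of alternative token spans (tuples)."""
    slots = []
    matcher = SequenceMatcher(a=ref_a, b=ref_b, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            # Shared words become single-alternative slots.
            slots.extend({(tok,)} for tok in ref_a[i1:i2])
        else:
            # Divergent spans become a slot holding both alternatives; for pure
            # insertions/deletions one alternative is the empty span.
            slots.append({tuple(ref_a[i1:i2]), tuple(ref_b[j1:j2])})
    return slots

def traverse(slots, limit=16):
    """Enumerate up to `limit` paths through the lattice as pseudo-references."""
    paths = []
    for choice in product(*slots):
        paths.append([tok for span in choice for tok in span])
        if len(paths) >= limit:
            break
    return paths

if __name__ == "__main__":
    refs = [
        "a man is riding a bike on the street".split(),
        "a person rides a bicycle down the road".split(),
    ]
    lattice = merge_pair(refs[0], refs[1])
    for pseudo in traverse(lattice):
        print(" ".join(pseudo))
```

Mixing and matching alternatives across slots is what makes the number of pseudo-references grow exponentially with the number of divergent spans; the actual paper builds richer lattices from all available references rather than a single pairwise merge.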
Anthology ID:
D18-1357
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
3188–3197
URL:
https://aclanthology.org/D18-1357
DOI:
10.18653/v1/D18-1357
Cite (ACL):
Renjie Zheng, Mingbo Ma, and Liang Huang. 2018. Multi-Reference Training with Pseudo-References for Neural Translation and Text Generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3188–3197, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Multi-Reference Training with Pseudo-References for Neural Translation and Text Generation (Zheng et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1357.pdf
Video:
https://aclanthology.org/D18-1357.mp4
Data:
MS COCO