A Structured Review of the Validity of BLEU

Ehud Reiter


Abstract
The BLEU metric has been widely used in NLP for over 15 years to evaluate NLP systems, especially in machine translation and natural language generation. I present a structured review of the evidence on whether BLEU is a valid evaluation technique—in other words, whether BLEU scores correlate with the real-world utility and user satisfaction of NLP systems; this review covers 284 correlations reported in 34 papers. Overall, the evidence supports using BLEU for diagnostic evaluation of MT systems (which is what it was originally proposed for), but does not support using BLEU outside of MT, for evaluation of individual texts, or for scientific hypothesis testing.
Anthology ID: J18-3002
Volume: Computational Linguistics, Volume 44, Issue 3 - September 2018
Month: September
Year: 2018
Address: Cambridge, MA
Venue: CL
Publisher: MIT Press
Pages: 393–401
URL: https://aclanthology.org/J18-3002
DOI: 10.1162/coli_a_00322
Cite (ACL): Ehud Reiter. 2018. A Structured Review of the Validity of BLEU. Computational Linguistics, 44(3):393–401.
Cite (Informal): A Structured Review of the Validity of BLEU (Reiter, CL 2018)
PDF: https://aclanthology.org/J18-3002.pdf
Data: WMT 2016