Best practices for the human evaluation of automatically generated text

Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, Emiel Krahmer


Abstract
Currently, there is little agreement as to how Natural Language Generation (NLG) systems should be evaluated. While there is some agreement regarding automatic metrics, there is a high degree of variation in the way that human evaluation is carried out. This paper provides an overview of how human evaluation is currently conducted, and presents a set of best practices, grounded in the literature. With this paper, we hope to contribute to the quality and consistency of human evaluations in NLG.
Anthology ID:
W19-8643
Volume:
Proceedings of the 12th International Conference on Natural Language Generation
Month:
October–November
Year:
2019
Address:
Tokyo, Japan
Editors:
Kees van Deemter, Chenghua Lin, Hiroya Takamura
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
355–368
URL:
https://aclanthology.org/W19-8643
DOI:
10.18653/v1/W19-8643
Cite (ACL):
Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation, pages 355–368, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal):
Best practices for the human evaluation of automatically generated text (van der Lee et al., INLG 2019)
PDF:
https://aclanthology.org/W19-8643.pdf
Supplementary attachment:
W19-8643.Supplementary_Attachment.xlsx