The Importance of Generation Order in Language Modeling

Nicolas Ford, Daniel Duckworth, Mohammad Norouzi, George Dahl


Abstract
Neural language models are a critical component of state-of-the-art systems for machine translation, summarization, audio transcription, and other tasks. These language models are almost universally autoregressive in nature, generating sentences one token at a time from left to right. This paper studies the influence of token generation order on model quality via a novel two-pass language model that produces partially-filled sentence “templates” and then fills in missing tokens. We compare various strategies for structuring these two passes and observe a surprisingly large variation in model quality. We find the most effective strategy generates function words in the first pass followed by content words in the second. We believe these experimental results justify a more extensive investigation of the generation order for neural language models.
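As a rough illustration of the two-pass template idea described in the abstract, the following minimal Python sketch separates a sentence into a function-word "template" (first pass) and the content words that fill it in (second pass). The function-word list, placeholder token, and helper names are hypothetical stand-ins for the paper's neural first- and second-pass models, not the authors' implementation.

# Minimal sketch of the two-pass decomposition described in the abstract.
# The function-word set and the example sentence are illustrative only.
FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "is", "was"}
PLACEHOLDER = "__"

def first_pass_template(tokens):
    # First pass: keep function words, mask content words with a placeholder.
    return [t if t.lower() in FUNCTION_WORDS else PLACEHOLDER for t in tokens]

def second_pass_fill(template, content_words):
    # Second pass: fill each placeholder with the next content word in order.
    it = iter(content_words)
    return [next(it) if t == PLACEHOLDER else t for t in template]

sentence = "the cat sat on the mat".split()
template = first_pass_template(sentence)                 # ['the', '__', '__', 'on', 'the', '__']
content = [t for t in sentence if t.lower() not in FUNCTION_WORDS]
print(template)
print(second_pass_fill(template, content))               # reconstructs the original sentence

In the paper, both passes are handled by learned neural models rather than rule-based lookups; this sketch only shows the ordering strategy found most effective (function words first, content words second).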
Anthology ID:
D18-1324
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2942–2946
URL:
https://aclanthology.org/D18-1324
DOI:
10.18653/v1/D18-1324
Cite (ACL):
Nicolas Ford, Daniel Duckworth, Mohammad Norouzi, and George Dahl. 2018. The Importance of Generation Order in Language Modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2942–2946, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
The Importance of Generation Order in Language Modeling (Ford et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1324.pdf