How to represent a word and predict it, too: Improving tied architectures for language modelling

Kristina Gulordava, Laura Aina, Gemma Boleda


Abstract
Recent state-of-the-art neural language models share the word representations used in the input and output mappings. We propose a simple modification to these architectures that decouples the hidden state from the word embedding prediction. Our architecture yields results comparable to or better than both previous tied models and models without tying, with a much smaller number of parameters. We also extend our proposal to word2vec models, showing that tying is appropriate for general word prediction tasks.
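The sketch below illustrates the general idea described in the abstract: an LSTM language model whose output layer reuses (ties) the input embedding matrix, with an additional projection that decouples the hidden state from the embedding used for prediction. This is a minimal, hypothetical reconstruction for illustration, not the authors' released code; the class name, layer names, and dimensions are placeholders.

import torch
import torch.nn as nn

class TiedLM(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Projection from the hidden state into the embedding space, so the
        # hidden representation is not forced to double as the prediction vector.
        self.proj = nn.Linear(hidden_dim, emb_dim)
        self.decoder = nn.Linear(emb_dim, vocab_size, bias=False)
        # Weight tying: the output matrix reuses the input embedding weights
        # (shapes match: both are vocab_size x emb_dim).
        self.decoder.weight = self.embedding.weight

    def forward(self, tokens, hidden=None):
        emb = self.embedding(tokens)             # (batch, seq, emb_dim)
        out, hidden = self.lstm(emb, hidden)     # (batch, seq, hidden_dim)
        logits = self.decoder(self.proj(out))    # (batch, seq, vocab_size)
        return logits, hidden

Because the decoder shares its weights with the embedding layer, tying adds no output-specific parameters beyond the small projection, which is how such architectures reduce the total parameter count relative to untied models.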
Anthology ID:
D18-1323
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2936–2941
URL:
https://aclanthology.org/D18-1323
DOI:
10.18653/v1/D18-1323
Cite (ACL):
Kristina Gulordava, Laura Aina, and Gemma Boleda. 2018. How to represent a word and predict it, too: Improving tied architectures for language modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2936–2941, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
How to represent a word and predict it, too: Improving tied architectures for language modelling (Gulordava et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1323.pdf
Attachment:
D18-1323.Attachment.pdf