Language Models Learn POS First

Naomi Saphra, Adam Lopez


Abstract
A glut of recent research shows that language models capture linguistic structure. Such work answers the question of whether a model represents linguistic structure. But how and when are these structures acquired? Rather than treating the training process itself as a black box, we investigate how representations of linguistic structure are learned over time. In particular, we demonstrate that different aspects of linguistic structure are learned at different rates, with part-of-speech tagging acquired early and global topic information learned continuously.
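The question of when part-of-speech information emerges in a model's representations is commonly studied by probing intermediate training checkpoints. The sketch below illustrates that general idea only; it is not the authors' method. A linear classifier is fit on a checkpoint's hidden states to predict POS tags, and its accuracy is read as a rough proxy for how much POS information the representations encode at that point in training. The loader, checkpoint names, and data are hypothetical placeholders so the sketch runs on its own.

```python
# Minimal probing sketch (illustrative only, not the paper's method).
# All names below (load_hidden_states, checkpoint ids) are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def load_hidden_states(checkpoint):
    """Hypothetical loader: returns (hidden_states, pos_tags) for a probe set.
    Random data stands in for real activations here, so probe accuracy will
    sit near chance; with real checkpoints it would track POS learning."""
    n_tokens, hidden_dim, n_tags = 2000, 64, 12
    X = rng.normal(size=(n_tokens, hidden_dim))
    y = rng.integers(0, n_tags, size=n_tokens)
    return X, y

# Probe each checkpoint: the linear probe's held-out accuracy is a rough
# measure of how much POS information the hidden states encode at that step.
for checkpoint in ["step_1k", "step_10k", "step_100k"]:
    X, y = load_hidden_states(checkpoint)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"{checkpoint}: POS probe accuracy = {probe.score(X_te, y_te):.3f}")
```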
Anthology ID: W18-5438
Volume: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month: November
Year: 2018
Address: Brussels, Belgium
Editors: Tal Linzen, Grzegorz Chrupała, Afra Alishahi
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 328–330
URL: https://aclanthology.org/W18-5438
DOI: 10.18653/v1/W18-5438
Cite (ACL): Naomi Saphra and Adam Lopez. 2018. Language Models Learn POS First. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 328–330, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Language Models Learn POS First (Saphra & Lopez, EMNLP 2018)
PDF: https://aclanthology.org/W18-5438.pdf