Learning Language Representations for Typology Prediction

Chaitanya Malaviya, Graham Neubig, Patrick Littell


Abstract
One central mystery of neural NLP is what neural models “know” about their subject matter. When a neural machine translation system learns to translate from one language to another, does it learn the syntax or semantics of the languages? Can this knowledge be extracted from the system to fill holes in human scientific knowledge? Existing typological databases contain relatively full feature specifications for only a few hundred languages. Exploiting the existence of parallel texts in more than a thousand languages, we build a massive many-to-one NMT system from 1017 languages into English, and use this to predict information missing from typological databases. Experiments show that the proposed method is able to infer not only syntactic, but also phonological and phonetic inventory features, and improves over a baseline that has access to information about the languages’ geographic and phylogenetic neighbors.
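
The prediction step described in the abstract lends itself to a short illustration: once the many-to-one NMT system has yielded a vector per language, an off-the-shelf classifier can map those vectors to typological features and fill in missing database entries. The sketch below is not the authors' implementation; it uses scikit-learn logistic regression on hypothetical toy data, with all language codes, vectors, and feature labels invented for illustration.

# Minimal sketch (not the paper's code): treat each language's learned
# NMT vector as features, train one classifier per typological feature,
# then predict values for languages the database lacks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in for language vectors learned by the NMT system
# (real vectors would come from the trained model; dimension 8 is arbitrary).
lang_vecs = {code: rng.normal(size=8)
             for code in ["fra", "deu", "tur", "jpn", "swa"]}

# Known binary values of one typological feature (e.g., a word-order
# feature from a database such as WALS) for a subset of languages;
# these labels are toy values, not real typological data.
known_values = {"fra": 1, "deu": 1, "tur": 0, "jpn": 0}

# Fit the classifier on the languages whose feature value is known.
train_langs = sorted(known_values)
X = np.stack([lang_vecs[l] for l in train_langs])
y = np.array([known_values[l] for l in train_langs])
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Predict the feature for languages missing from the database.
for lang in sorted(set(lang_vecs) - set(known_values)):
    p = clf.predict_proba(lang_vecs[lang].reshape(1, -1))[0, 1]
    print(f"{lang}: P(feature = 1) = {p:.2f}")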
Anthology ID:
D17-1268
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2529–2535
URL:
https://aclanthology.org/D17-1268
DOI:
10.18653/v1/D17-1268
Cite (ACL):
Chaitanya Malaviya, Graham Neubig, and Patrick Littell. 2017. Learning Language Representations for Typology Prediction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2529–2535, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Learning Language Representations for Typology Prediction (Malaviya et al., EMNLP 2017)
PDF:
https://aclanthology.org/D17-1268.pdf
Code:
chaitanyamalaviya/lang-reps