Syntax Helps ELMo Understand Semantics: Is Syntax Still Relevant in a Deep Neural Architecture for SRL?

Emma Strubell, Andrew McCallum


Abstract
Do unsupervised methods for learning rich, contextualized token representations obviate the need for explicit modeling of linguistic structure in neural network models for semantic role labeling (SRL)? We address this question by incorporating the massively successful ELMo embeddings (Peters et al., 2018) into LISA (Strubell and McCallum, 2018), a strong, linguistically-informed neural network architecture for SRL. In experiments on the CoNLL-2005 shared task we find that though ELMo out-performs typical word embeddings, beginning to close the gap in F1 between LISA with predicted and gold syntactic parses, syntactically-informed models still out-perform syntax-free models when both use ELMo, especially on out-of-domain data. Our results suggest that linguistic structures are indeed still relevant in this golden age of deep learning for NLP.
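The abstract's central modeling choice, feeding ELMo's contextualized token representations into an SRL network alongside standard word embeddings, can be sketched roughly as below. This is a minimal illustration of the ELMo layer-mixing scheme from Peters et al. (2018) combined with a learned word embedding, not the authors' exact LISA integration; the module names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Sketch of ELMo layer mixing (Peters et al., 2018): a softmax-weighted
    sum of biLM layer representations, scaled by a learned gamma."""
    def __init__(self, num_layers: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))  # s_j before softmax
        self.gamma = nn.Parameter(torch.ones(1))               # global scale

    def forward(self, layer_reps):
        # layer_reps: list of [batch, seq_len, elmo_dim] tensors, one per biLM layer
        s = torch.softmax(self.weights, dim=0)
        mixed = sum(w * h for w, h in zip(s, layer_reps))
        return self.gamma * mixed

class SRLInputEncoder(nn.Module):
    """Hypothetical input layer: concatenate the mixed ELMo representation with a
    learned word embedding, roughly how contextualized embeddings are commonly
    supplied to a neural SRL encoder."""
    def __init__(self, vocab_size, word_dim, num_elmo_layers=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.mix = ScalarMix(num_elmo_layers)

    def forward(self, token_ids, elmo_layer_reps):
        elmo = self.mix(elmo_layer_reps)        # [batch, seq_len, elmo_dim]
        words = self.word_emb(token_ids)        # [batch, seq_len, word_dim]
        return torch.cat([elmo, words], dim=-1)  # input to the downstream SRL encoder
```

The concatenated vectors would then be passed to the (syntactically-informed or syntax-free) encoder being compared in the paper; the exact combination used in LISA may differ.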
Anthology ID:
W18-2904
Volume:
Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Georgiana Dinu, Miguel Ballesteros, Avirup Sil, Sam Bowman, Wael Hamza, Anders Søgaard, Tahira Naseem, Yoav Goldberg
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
19–27
URL:
https://aclanthology.org/W18-2904
DOI:
10.18653/v1/W18-2904
Cite (ACL):
Emma Strubell and Andrew McCallum. 2018. Syntax Helps ELMo Understand Semantics: Is Syntax Still Relevant in a Deep Neural Architecture for SRL?. In Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP, pages 19–27, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Syntax Helps ELMo Understand Semantics: Is Syntax Still Relevant in a Deep Neural Architecture for SRL? (Strubell & McCallum, ACL 2018)
PDF:
https://aclanthology.org/W18-2904.pdf