Improving Neural Parsing by Disentangling Model Combination and Reranking Effects

Daniel Fried, Mitchell Stern, Dan Klein


Abstract
Recent work has proposed several generative neural models for constituency parsing that achieve state-of-the-art results. Since direct search in these generative models is difficult, they have primarily been used to rescore candidate outputs from base parsers in which decoding is more straightforward. We first present an algorithm for direct search in these generative models. We then demonstrate that the rescoring results are at least partly due to implicit model combination rather than reranking effects. Finally, we show that explicit model combination can improve performance even further, resulting in new state-of-the-art numbers on the PTB of 94.25 F1 when training only on gold data and 94.66 F1 when using external data.
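The explicit model combination the abstract describes amounts to scoring each candidate tree under both the base parser and the generative model and picking the candidate with the best combined score. A minimal sketch, using a simple log-linear interpolation with made-up candidate trees and scores (the paper's exact combination scheme and weights are not reproduced here):

```python
import math

def rerank(candidates, base_scores, gen_scores, lam=0.5):
    """Pick the candidate maximizing an interpolation of base-parser
    and generative-model log-probabilities (hypothetical combination)."""
    combined = [lam * g + (1 - lam) * b
                for b, g in zip(base_scores, gen_scores)]
    best = max(range(len(candidates)), key=lambda i: combined[i])
    return candidates[best]

# Toy example: three candidate trees with invented log-probabilities.
trees = ["(S (NP A) (VP B))", "(S (NP A B))", "(S (VP A B))"]
base = [math.log(0.6), math.log(0.3), math.log(0.1)]
gen = [math.log(0.2), math.log(0.7), math.log(0.1)]
print(rerank(trees, base, gen))  # prints "(S (NP A B))"
```

Pure rescoring corresponds to `lam=1.0` (generative score only); the combination effect the paper disentangles comes from the base model's score implicitly surviving in the candidate list even then.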
Anthology ID:
P17-2025
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Editors:
Regina Barzilay, Min-Yen Kan
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
161–166
URL:
https://aclanthology.org/P17-2025
DOI:
10.18653/v1/P17-2025
Cite (ACL):
Daniel Fried, Mitchell Stern, and Dan Klein. 2017. Improving Neural Parsing by Disentangling Model Combination and Reranking Effects. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 161–166, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Improving Neural Parsing by Disentangling Model Combination and Reranking Effects (Fried et al., ACL 2017)
PDF:
https://aclanthology.org/P17-2025.pdf
Video:
https://aclanthology.org/P17-2025.mp4
Data
Penn Treebank