An Empirical Study of Building a Strong Baseline for Constituency Parsing

Jun Suzuki, Sho Takase, Hidetaka Kamigaito, Makoto Morishita, Masaaki Nagata
Abstract
This paper investigates the construction of a strong baseline based on general-purpose sequence-to-sequence models for constituency parsing. We incorporate several techniques that were mainly developed for natural language generation tasks, e.g., machine translation and summarization, and demonstrate that the sequence-to-sequence model matches the performance of current state-of-the-art parsers (almost) without requiring any explicit task-specific knowledge or architecture for constituency parsing.
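In this setting, a general-purpose sequence-to-sequence model treats parsing as string generation: the source side is the word sequence and the target side is a linearized (bracketed) form of the constituency tree. The sketch below is a hypothetical illustration of such a tree linearization only; the linearize function, the XX placeholder token, and the toy tree are assumptions for illustration, not the authors' released code.

# Hypothetical sketch: linearize a constituency tree into the bracketed token
# sequence that a generic encoder-decoder model would be trained to generate.
from typing import List, Tuple

Tree = Tuple[str, List["Tree"]]  # (label, children); a leaf has an empty child list


def linearize(tree: Tree) -> List[str]:
    """Depth-first traversal producing target tokens such as '(S', 'XX', ')S'.

    Word positions are replaced by a placeholder token ('XX') so the decoder
    only has to predict tree structure, in the spirit of tree linearizations
    commonly used for sequence-to-sequence parsing.
    """
    label, children = tree
    if not children:
        # Leaf / word position: emit the placeholder instead of the word itself.
        return ["XX"]
    tokens = ["(" + label]
    for child in children:
        tokens.extend(linearize(child))
    tokens.append(")" + label)
    return tokens


if __name__ == "__main__":
    # Toy tree for "The cat sleeps": (S (NP (DT The) (NN cat)) (VP (VBZ sleeps)))
    tree: Tree = (
        "S",
        [
            ("NP", [("DT", []), ("NN", [])]),
            ("VP", [("VBZ", [])]),
        ],
    )
    print(" ".join(linearize(tree)))
    # -> (S (NP XX XX )NP (VP XX )VP )S

With the trees linearized this way, training reduces to standard sequence-to-sequence learning over (sentence, bracketed-sequence) pairs, which is what lets generic techniques from machine translation and summarization carry over.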
Anthology ID:
P18-2097
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
612–618
URL:
https://aclanthology.org/P18-2097
DOI:
10.18653/v1/P18-2097
Cite (ACL):
Jun Suzuki, Sho Takase, Hidetaka Kamigaito, Makoto Morishita, and Masaaki Nagata. 2018. An Empirical Study of Building a Strong Baseline for Constituency Parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 612–618, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
An Empirical Study of Building a Strong Baseline for Constituency Parsing (Suzuki et al., ACL 2018)
PDF:
https://aclanthology.org/P18-2097.pdf
Note:
P18-2097.Notes.pdf
Code:
nttcslab-nlp/strong_s2s_baseline_parser
Data:
Penn Treebank