Reranking for Neural Semantic Parsing

Pengcheng Yin, Graham Neubig


Abstract
Semantic parsing considers the task of transducing natural language (NL) utterances into machine-executable meaning representations (MRs). While neural network-based semantic parsers have achieved impressive improvements over previous methods, results are still far from perfect, and cursory manual inspection can easily identify obvious problems such as lack of adequacy or coherence in the generated MRs. This paper presents a simple approach to quickly iterate on and improve the performance of an existing neural semantic parser by reranking an n-best list of predicted MRs, using features designed to fix observed problems with baseline models. We implement our reranker on top of a competitive neural semantic parser and test it on four semantic parsing (GEO, ATIS) and Python code generation (Django, CoNaLa) tasks, improving the strong baseline parser by up to 5.7% absolute in BLEU (CoNaLa) and 2.9% in accuracy (Django), outperforming the best published neural parser results on all four datasets.
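
To illustrate the reranking idea summarized above, here is a minimal sketch in Python, assuming the reranker scores each n-best candidate as the parser's log-probability plus a weighted sum of auxiliary feature scores. The Candidate/rerank names, the specific features (empty-output penalty, utterance-MR word overlap), and the weights are hypothetical illustrations, not the paper's actual feature set or training procedure.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Candidate:
    # One hypothesis from the base parser's n-best list.
    mr: str              # predicted meaning representation (e.g. Python code)
    parser_score: float  # log-probability assigned by the base parser

def rerank(utterance: str,
           n_best: List[Candidate],
           features: Dict[str, Callable[[str, Candidate], float]],
           weights: Dict[str, float]) -> Candidate:
    # Pick the candidate maximizing the parser score plus the weighted
    # sum of auxiliary feature scores.
    def score(cand: Candidate) -> float:
        return cand.parser_score + sum(
            weights[name] * feat(utterance, cand)
            for name, feat in features.items())
    return max(n_best, key=score)

# Hypothetical features: penalize empty outputs and reward crude
# word overlap between the utterance and the MR (an adequacy proxy).
features = {
    "is_empty": lambda utt, c: float(not c.mr.strip()),
    "word_overlap": lambda utt, c: len(
        set(utt.lower().split()) & set(c.mr.lower().split())),
}
weights = {"is_empty": -10.0, "word_overlap": 0.5}

n_best = [
    Candidate(mr="sorted(my_list, reverse=True)", parser_score=-1.2),
    Candidate(mr="", parser_score=-1.0),
]
print(rerank("sort my_list in descending order", n_best, features, weights).mr)

In this toy example the empty hypothesis has a slightly higher parser score but is heavily penalized by the is_empty feature, so the reranker returns the adequate candidate instead.
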
Anthology ID:
P19-1447
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
4553–4559
URL:
https://aclanthology.org/P19-1447
DOI:
10.18653/v1/P19-1447
Cite (ACL):
Pengcheng Yin and Graham Neubig. 2019. Reranking for Neural Semantic Parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4553–4559, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Reranking for Neural Semantic Parsing (Yin & Neubig, ACL 2019)
PDF:
https://aclanthology.org/P19-1447.pdf
Data
CoNaLa, CoNaLa-Ext, Django