Fluency Boost Learning and Inference for Neural Grammatical Error Correction

Tao Ge, Furu Wei, Ming Zhou


Abstract
Most of the neural sequence-to-sequence (seq2seq) models for grammatical error correction (GEC) have two limitations: (1) a seq2seq model may not be well generalized with only limited error-corrected data; (2) a seq2seq model may fail to completely correct a sentence with multiple errors through normal seq2seq inference. We attempt to address these limitations by proposing a fluency boost learning and inference mechanism. Fluency boosting learning generates fluency-boost sentence pairs during training, enabling the error correction model to learn how to improve a sentence’s fluency from more instances, while fluency boosting inference allows the model to correct a sentence incrementally with multiple inference steps until the sentence’s fluency stops increasing. Experiments show our approaches improve the performance of seq2seq models for GEC, achieving state-of-the-art results on both CoNLL-2014 and JFLEG benchmark datasets.
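As a rough illustration of the fluency boosting inference loop described in the abstract, the Python sketch below repeatedly re-corrects the model's own output until a fluency score stops increasing. It is a minimal sketch, not the authors' implementation: the fluency score (an inverse-cross-entropy heuristic), the token_logprob language-model callback, and the single-round correct function are hypothetical stand-ins for the components described in the paper.

def fluency(sentence, token_logprob):
    """Hypothetical fluency score: inverse of (1 + average per-token
    cross-entropy under a language model). Higher means more fluent.
    `token_logprob(tokens)` is assumed to return one log-probability
    per token."""
    tokens = sentence.split()
    cross_entropy = -sum(token_logprob(tokens)) / max(len(tokens), 1)
    return 1.0 / (1.0 + cross_entropy)

def fluency_boost_inference(sentence, correct, token_logprob, max_rounds=5):
    """Multi-round inference: feed the model's output back as input and
    keep the new correction only while fluency keeps increasing.
    `correct(sentence)` stands in for one round of seq2seq decoding."""
    best, best_score = sentence, fluency(sentence, token_logprob)
    for _ in range(max_rounds):
        candidate = correct(best)
        score = fluency(candidate, token_logprob)
        if score <= best_score:  # fluency stopped increasing; terminate
            break
        best, best_score = candidate, score
    return best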
Anthology ID:
P18-1097
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1055–1065
URL:
https://aclanthology.org/P18-1097
DOI:
10.18653/v1/P18-1097
Cite (ACL):
Tao Ge, Furu Wei, and Ming Zhou. 2018. Fluency Boost Learning and Inference for Neural Grammatical Error Correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1055–1065, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Fluency Boost Learning and Inference for Neural Grammatical Error Correction (Ge et al., ACL 2018)
PDF:
https://aclanthology.org/P18-1097.pdf
Note:
 P18-1097.Notes.pdf
Video:
 https://vimeo.com/285802209
Data
CoNLL-2014 Shared Task: Grammatical Error Correction
JFLEG