Split and Rephrase: Better Evaluation and Stronger Baselines

Roee Aharoni, Yoav Goldberg


Abstract
Splitting and rephrasing a complex sentence into several shorter sentences that convey the same meaning is a challenging problem in NLP. We show that while vanilla seq2seq models can reach high scores on the proposed benchmark (Narayan et al., 2017), they suffer from memorization of the training set, which contains more than 89% of the unique simple sentences found in the validation and test sets. To remedy this, we present a new train-development-test data split and neural models augmented with a copy mechanism, outperforming the best reported baseline by 8.68 BLEU and fostering further progress on the task.
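The abstract does not spell out the copy mechanism; a common parameterization is the pointer-generator mixture of See et al. (2017), which the sketch below assumes purely for illustration (the paper's exact formulation may differ). At each decoding step, the model interpolates between generating a token from the output vocabulary and copying a token from the source sentence:

P(w) = p_{\text{gen}} \cdot P_{\text{vocab}}(w) + (1 - p_{\text{gen}}) \cdot \sum_{i \,:\, x_i = w} a_i

where p_{\text{gen}} \in [0, 1] is a learned generation switch, P_{\text{vocab}} is the softmax distribution over the vocabulary, x_i are the source tokens, and a_i are the decoder's attention weights at the current step. Copying lets rare entity names from the complex source sentence be reproduced verbatim in the shorter output sentences, which is central to splitting without losing meaning.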
Anthology ID:
P18-2114
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
719–724
URL:
https://aclanthology.org/P18-2114
DOI:
10.18653/v1/P18-2114
Cite (ACL):
Roee Aharoni and Yoav Goldberg. 2018. Split and Rephrase: Better Evaluation and Stronger Baselines. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 719–724, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Split and Rephrase: Better Evaluation and Stronger Baselines (Aharoni & Goldberg, ACL 2018)
PDF:
https://aclanthology.org/P18-2114.pdf
Note:
P18-2114.Notes.pdf
Presentation:
P18-2114.Presentation.pdf
Video:
https://aclanthology.org/P18-2114.mp4
Code:
biu-nlp/sprp-acl2018