Learning To Split and Rephrase From Wikipedia Edit History

Jan A. Botha, Manaal Faruqui, John Alex, Jason Baldridge, Dipanjan Das


Abstract
Split and rephrase is the task of breaking down a sentence into shorter ones that together convey the same meaning. We extract a rich new dataset for this task by mining Wikipedia’s edit history: WikiSplit contains one million naturally occurring sentence rewrites, providing sixty times more distinct split examples and a ninety times larger vocabulary than the WebSplit corpus introduced by Narayan et al. (2017) as a benchmark for this task. Incorporating WikiSplit as training data produces a model with qualitatively better predictions that score 32 BLEU points above the prior best result on the WebSplit benchmark.
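To make the task format concrete, here is a minimal, illustrative Python sketch of one split-and-rephrase record: a complex sentence paired with the shorter sentences that together convey the same meaning. The parsing assumes a tab-separated release format in which the rewrite's sentences are joined by a "<::::>" delimiter; that delimiter, the field names, and the toy sentence are assumptions for illustration, not details taken from this page.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SplitExample:
    """One split-and-rephrase instance: a complex sentence and its rewrite
    as several shorter sentences conveying the same meaning."""
    complex_sentence: str
    simple_sentences: List[str]


def parse_wikisplit_line(line: str) -> SplitExample:
    """Parse one TSV line of the assumed form
    '<complex sentence>\t<split 1> <::::> <split 2> ...'.
    The '<::::>' separator is an assumption about the data release."""
    complex_sentence, rewrite = line.rstrip("\n").split("\t")
    simple_sentences = [s.strip() for s in rewrite.split("<::::>")]
    return SplitExample(complex_sentence, simple_sentences)


if __name__ == "__main__":
    # Toy record in the assumed format (constructed example).
    line = (
        "Street Rod is the first in a series of two games released for "
        "the PC and Commodore 64 in 1989 .\t"
        "Street Rod is the first in a series of two games . <::::> "
        "It was released for the PC and Commodore 64 in 1989 ."
    )
    example = parse_wikisplit_line(line)
    print(example.complex_sentence)
    for sentence in example.simple_sentences:
        print("  ->", sentence)
```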
Anthology ID:
D18-1080
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
732–737
URL:
https://aclanthology.org/D18-1080
DOI:
10.18653/v1/D18-1080
Cite (ACL):
Jan A. Botha, Manaal Faruqui, John Alex, Jason Baldridge, and Dipanjan Das. 2018. Learning To Split and Rephrase From Wikipedia Edit History. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 732–737, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Learning To Split and Rephrase From Wikipedia Edit History (Botha et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1080.pdf
Data:
WikiSplit, WebNLG