Identifying Where to Focus in Reading Comprehension for Neural Question Generation

Xinya Du, Claire Cardie


Abstract
A first step in the task of automatically generating questions for testing reading comprehension is to identify question-worthy sentences, i.e. sentences in a text passage that humans find it worthwhile to ask questions about. We propose a hierarchical neural sentence-level sequence tagging model for this task, which existing approaches to question generation have ignored. The approach is fully data-driven — with no sophisticated NLP pipelines or any hand-crafted rules/features — and compares favorably to a number of baselines when evaluated on the SQuAD data set. When incorporated into an existing neural question generation system, the resulting end-to-end system achieves state-of-the-art performance for paragraph-level question generation for reading comprehension.
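The abstract describes a hierarchical model that encodes each sentence and then tags sentences in sequence as question-worthy or not. As a rough illustration of that idea (not the authors' code; all module names, dimensions, and the mean-pooling choice here are assumptions), a minimal two-level tagger might look like:

```python
# Hypothetical sketch of a hierarchical sentence-level sequence tagger:
# a word-level BiLSTM produces one vector per sentence, and a
# sentence-level BiLSTM over the paragraph tags each sentence as
# question-worthy (1) or not (0). Dimensions are illustrative only.
import torch
import torch.nn as nn

class SentenceTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hid_dim=32, num_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # word-level encoder: mean-pooled BiLSTM states give a sentence vector
        self.word_lstm = nn.LSTM(emb_dim, hid_dim,
                                 bidirectional=True, batch_first=True)
        # sentence-level encoder runs over the sentence vectors of a paragraph
        self.sent_lstm = nn.LSTM(2 * hid_dim, hid_dim,
                                 bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hid_dim, num_tags)

    def forward(self, paragraph):
        # paragraph: (num_sents, max_words) matrix of word ids
        word_states, _ = self.word_lstm(self.embed(paragraph))
        sent_vecs = word_states.mean(dim=1)            # (num_sents, 2*hid)
        sent_states, _ = self.sent_lstm(sent_vecs.unsqueeze(0))
        return self.out(sent_states.squeeze(0))        # (num_sents, num_tags)

tagger = SentenceTagger(vocab_size=100)
logits = tagger(torch.randint(0, 100, (5, 12)))  # 5 sentences, 12 words each
print(logits.shape)  # torch.Size([5, 2]): one question-worthiness score pair per sentence
```

A real system would be trained with a per-sentence cross-entropy loss against question-worthiness labels derived from SQuAD, and the paper's full model is more elaborate than this sketch.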
Anthology ID: D17-1219
Volume: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month: September
Year: 2017
Address: Copenhagen, Denmark
Editors: Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 2067–2073
URL: https://aclanthology.org/D17-1219
DOI: 10.18653/v1/D17-1219
Cite (ACL): Xinya Du and Claire Cardie. 2017. Identifying Where to Focus in Reading Comprehension for Neural Question Generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2067–2073, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal): Identifying Where to Focus in Reading Comprehension for Neural Question Generation (Du & Cardie, EMNLP 2017)
PDF: https://aclanthology.org/D17-1219.pdf
Attachment: D17-1219.Attachment.pdf
Data: SQuAD