Reinforced Extractive Summarization with Question-Focused Rewards

Kristjan Arumae, Fei Liu


Abstract
We investigate a new training paradigm for extractive summarization. Traditionally, human abstracts are used to derive gold-standard labels for extraction units. However, the labels are often inaccurate, because human abstracts and source documents cannot be easily aligned at the word level. In this paper, we convert human abstracts to a set of Cloze-style comprehension questions. System summaries are encouraged to preserve salient source content useful for answering questions and to share common words with the abstracts. We use reinforcement learning to explore the space of possible extractive summaries and introduce a question-focused reward function to promote concise, fluent, and informative summaries. Our experiments show that the proposed method is effective: it surpasses state-of-the-art systems on the standard summarization dataset.
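The Cloze-question idea described in the abstract can be sketched roughly as follows. The masking heuristic, function names, and scoring below are illustrative assumptions only, not the authors' actual method (the paper trains an extractor with reinforcement learning; this sketch merely shows how an abstract sentence could become a fill-in-the-blank question and how a summary might be scored against it):

```python
# Hypothetical sketch: turn abstract sentences into Cloze questions by
# masking one content word, then reward a candidate summary for
# containing the masked answers. All heuristics here are assumptions.

def make_cloze(sentence, stopwords={"the", "a", "an", "of", "to", "and", "in"}):
    """Mask the longest non-stopword token to form a Cloze question."""
    tokens = sentence.split()
    answer = max((t for t in tokens if t.lower() not in stopwords), key=len)
    question = " ".join("_____" if t == answer else t for t in tokens)
    return question, answer

def qa_reward(summary, cloze_pairs):
    """Fraction of Cloze answers recoverable from the summary's words."""
    words = set(summary.lower().split())
    hits = sum(1 for _, ans in cloze_pairs if ans.lower() in words)
    return hits / len(cloze_pairs) if cloze_pairs else 0.0

# Toy usage: build questions from two abstract sentences, score a summary.
abstract_sents = ["the storm hit the coast on friday",
                  "residents were evacuated to shelters"]
pairs = [make_cloze(s) for s in abstract_sents]
reward = qa_reward("on friday the storm forced residents to flee", pairs)
```

In an RL setting, a reward like `qa_reward` would be combined with fluency and length terms and used to score sampled extractive summaries during training.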
Anthology ID:
P18-3015
Volume:
Proceedings of ACL 2018, Student Research Workshop
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Vered Shwartz, Jeniya Tabassum, Rob Voigt, Wanxiang Che, Marie-Catherine de Marneffe, Malvina Nissim
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
105–111
URL:
https://aclanthology.org/P18-3015
DOI:
10.18653/v1/P18-3015
Cite (ACL):
Kristjan Arumae and Fei Liu. 2018. Reinforced Extractive Summarization with Question-Focused Rewards. In Proceedings of ACL 2018, Student Research Workshop, pages 105–111, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Reinforced Extractive Summarization with Question-Focused Rewards (Arumae & Liu, ACL 2018)
PDF:
https://aclanthology.org/P18-3015.pdf
Presentation:
P18-3015.Presentation.pdf