Generating Topic-Oriented Summaries Using Neural Attention

Kundan Krishna, Balaji Vasan Srinivasan


Abstract
Summarizing a document requires identifying the important parts of the document with the objective of providing a quick overview to a reader. However, a long article can span several topics, and a single summary cannot do justice to all of them. Further, readers' interests can vary, and the notion of importance can change from one reader to another. Existing summarization algorithms generate a single summary and are not capable of generating multiple summaries tuned to the interests of the readers. In this paper, we propose an attention-based RNN framework to generate multiple summaries of a single document tuned to different topics of interest. Our method outperforms existing baselines, and our results suggest that the attention of generative networks can be successfully biased to look at sentences relevant to a topic and effectively used to generate topic-tuned summaries.
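The abstract describes biasing the attention of a generative (encoder-decoder RNN) model toward topic-relevant sentences. The paper's exact formulation is not reproduced on this page; the following is a minimal illustrative sketch of the general idea, assuming additive (Bahdanau-style) attention, where names such as topic_relevance and bias_weight are hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def topic_biased_attention(enc_states, dec_state, topic_relevance,
                           W_enc, W_dec, v, bias_weight=1.0):
    """Sketch: nudge attention toward encoder positions relevant to a topic.

    enc_states:      (T, H) encoder hidden states
    dec_state:       (H,)   current decoder hidden state
    topic_relevance: (T,)   per-position topic-relevance scores in [0, 1]
                     (e.g. similarity of each source sentence to the topic)
    W_enc, W_dec, v: additive-attention parameters of shapes (H, A), (H, A), (A,)
    """
    # Standard additive attention scores over encoder positions
    scores = np.tanh(enc_states @ W_enc + dec_state @ W_dec) @ v   # (T,)
    # Add a topic bias before normalizing, so relevant positions get more weight
    attn = softmax(scores + bias_weight * topic_relevance)          # (T,)
    # Context vector used by the decoder at this step
    context = attn @ enc_states                                     # (H,)
    return context, attn
```

Varying the topic_relevance vector (one per topic of interest) would yield different attention distributions, and hence different summaries, from the same document.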
Anthology ID:
N18-1153
Volume:
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Marilyn Walker, Heng Ji, Amanda Stent
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1697–1705
URL:
https://aclanthology.org/N18-1153
DOI:
10.18653/v1/N18-1153
Cite (ACL):
Kundan Krishna and Balaji Vasan Srinivasan. 2018. Generating Topic-Oriented Summaries Using Neural Attention. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1697–1705, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Generating Topic-Oriented Summaries Using Neural Attention (Krishna & Srinivasan, NAACL 2018)
PDF:
https://aclanthology.org/N18-1153.pdf
Data
CNN/Daily Mail