Generating Reasonable and Diversified Story Ending Using Sequence to Sequence Model with Adversarial Training

Zhongyang Li, Xiao Ding, Ting Liu


Abstract
Story generation is a challenging problem in artificial intelligence (AI) and has received a lot of interest in the natural language processing (NLP) community. Most previous work tried to solve this problem using a Sequence to Sequence (Seq2Seq) model trained with Maximum Likelihood Estimation (MLE). However, the pure MLE training objective greatly limits the power of the Seq2Seq model in generating high-quality stories. In this paper, we propose using an adversarial training augmented Seq2Seq model to generate reasonable and diversified story endings given a story context. Our model includes a generator that defines the policy of generating a story ending, and a discriminator that labels story endings as human-generated or machine-generated. Carefully designed human and automatic evaluation metrics demonstrate that our adversarial training augmented Seq2Seq model can generate more reasonable and diversified story endings than a purely MLE-trained Seq2Seq model. Moreover, our model achieves better performance on the Story Cloze Test task, with an accuracy of 62.6%, compared with state-of-the-art baseline methods.
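The generator–discriminator loop described in the abstract can be illustrated with a minimal, self-contained sketch. The toy generator, vocabulary, and discriminator below are all hypothetical stand-ins (not the paper's actual Seq2Seq or CNN components): the generator is a unigram policy updated with REINFORCE, and the discriminator simply rewards "human-like" tokens.

```python
import math
import random

random.seed(0)

# Hypothetical toy vocabulary of candidate ending tokens (illustration only).
VOCAB = ["happy", "sad", "the", "end", "surprise"]


class ToyGenerator:
    """Unigram policy over VOCAB, standing in for the Seq2Seq generator."""

    def __init__(self):
        self.logits = {w: 0.0 for w in VOCAB}

    def probs(self):
        # Softmax over logits.
        exps = {w: math.exp(l) for w, l in self.logits.items()}
        z = sum(exps.values())
        return {w: e / z for w, e in exps.items()}

    def sample(self):
        p = self.probs()
        r, acc = random.random(), 0.0
        for w, pw in p.items():
            acc += pw
            if r <= acc:
                return w
        return VOCAB[-1]

    def reinforce_update(self, word, reward, lr=0.5):
        # REINFORCE: shift probability mass toward the sampled word
        # in proportion to the (centered) reward from the discriminator.
        p = self.probs()
        for w in VOCAB:
            grad = (1.0 if w == word else 0.0) - p[w]
            self.logits[w] += lr * reward * grad


def discriminator(word):
    """Hypothetical discriminator: scores an ending as human-like (1.0) or not (0.0)."""
    return 1.0 if word in ("happy", "end") else 0.0


gen = ToyGenerator()
for _ in range(200):
    w = gen.sample()                    # generator proposes an ending token
    r = discriminator(w)                # discriminator scores it as a reward
    gen.reinforce_update(w, r - 0.5)    # centered reward acts as a simple baseline

final = gen.probs()
```

After training, the policy concentrates on the tokens the discriminator rewards, mirroring how the adversarial signal pushes the generator toward endings the discriminator judges human-generated.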
Anthology ID:
C18-1088
Volume:
Proceedings of the 27th International Conference on Computational Linguistics
Month:
August
Year:
2018
Address:
Santa Fe, New Mexico, USA
Editors:
Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
1033–1043
URL:
https://aclanthology.org/C18-1088
Cite (ACL):
Zhongyang Li, Xiao Ding, and Ting Liu. 2018. Generating Reasonable and Diversified Story Ending Using Sequence to Sequence Model with Adversarial Training. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1033–1043, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Generating Reasonable and Diversified Story Ending Using Sequence to Sequence Model with Adversarial Training (Li et al., COLING 2018)
PDF:
https://aclanthology.org/C18-1088.pdf
Data
ROCStories