Deep Reinforcement Learning for NLP

William Yang Wang, Jiwei Li, Xiaodong He


Abstract
Many Natural Language Processing (NLP) tasks (including generation, language grounding, reasoning, information extraction, coreference resolution, and dialog) can be formulated as deep reinforcement learning (DRL) problems. However, since language is discrete and the space of all possible sentences is infinite, formulating NLP tasks as reinforcement learning problems poses many challenges. In this tutorial, we provide a gentle introduction to the foundations of deep reinforcement learning, as well as some practical DRL solutions in NLP. We describe recent advances in designing deep reinforcement learning methods for NLP, with a special focus on generation, dialogue, and information extraction. Finally, we discuss why these methods succeed and when they may fail, aiming to provide practical advice about deep reinforcement learning for solving real-world NLP problems.
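To make the abstract's framing concrete, the sketch below shows one common way a generation task can be cast as a reinforcement learning problem: tokens are discrete actions, a sentence is an episode, and a scalar reward arrives only at the end. Everything here is a toy assumption for illustration (the four-word vocabulary, the per-step softmax policy standing in for a decoder, and the reward that simply checks whether the sampled sentence terminates with `<eos>`); it is a minimal REINFORCE sketch, not the tutorial's actual method.

```python
import numpy as np

# Hypothetical toy setup: the "policy" is one softmax over a tiny vocabulary
# per time step, and the reward is 1.0 if the sampled sentence ends with the
# end-of-sequence token, else 0.0. All names here are illustrative.
rng = np.random.default_rng(0)
VOCAB = ["the", "cat", "sat", "<eos>"]
MAX_LEN = 4

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sample_episode(theta):
    """Sample tokens step by step; return token ids and log-prob gradients."""
    tokens, grads = [], []
    for t in range(MAX_LEN):
        p = softmax(theta[t])
        a = rng.choice(len(VOCAB), p=p)
        # gradient of log p(a) w.r.t. the logits at step t: one_hot(a) - p
        g = -p
        g[a] += 1.0
        tokens.append(a)
        grads.append(g)
        if VOCAB[a] == "<eos>":
            break
    return tokens, grads

def reward(tokens):
    # Sentence-level reward, delivered only when the episode ends.
    return 1.0 if VOCAB[tokens[-1]] == "<eos>" else 0.0

# One logit vector per time step (a stand-in for a decoder's hidden states).
theta = np.zeros((MAX_LEN, len(VOCAB)))
lr = 0.5
for _ in range(200):
    tokens, grads = sample_episode(theta)
    R = reward(tokens)
    for t, g in enumerate(grads):
        theta[t] += lr * R * g  # REINFORCE: ascend R * grad log pi
```

Because the action space is discrete and the reward is sparse and sentence-level, the gradient estimate is high-variance; this is exactly the kind of difficulty the abstract alludes to, and practical systems add baselines or reward shaping to cope with it.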
Anthology ID:
P18-5007
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Yoav Artzi, Jacob Eisenstein
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
19–21
URL:
https://aclanthology.org/P18-5007
DOI:
10.18653/v1/P18-5007
Cite (ACL):
William Yang Wang, Jiwei Li, and Xiaodong He. 2018. Deep Reinforcement Learning for NLP. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 19–21, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Deep Reinforcement Learning for NLP (Wang et al., ACL 2018)
PDF:
https://aclanthology.org/P18-5007.pdf