Subgoal Discovery for Hierarchical Dialogue Policy Learning

Da Tang, Xiujun Li, Jianfeng Gao, Chong Wang, Lihong Li, Tony Jebara


Abstract
Developing agents to engage in complex goal-oriented dialogues is challenging partly because the main learning signals are very sparse in long conversations. In this paper, we propose a divide-and-conquer approach that discovers and exploits the hidden structure of the task to enable efficient policy learning. First, given successful example dialogues, we propose the Subgoal Discovery Network (SDN) to divide a complex goal-oriented task into a set of simpler subgoals in an unsupervised fashion. We then use these subgoals to learn a multi-level policy by hierarchical reinforcement learning. We demonstrate our method by building a dialogue agent for the composite task of travel planning. Experiments with simulated and real users show that our approach performs competitively against a state-of-the-art method that requires human-defined subgoals. Moreover, we show that the learned subgoals are often human comprehensible.
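To make the two-level policy structure described above concrete, here is a minimal, hypothetical sketch in Python: a top-level policy selects a discovered subgoal, and a low-level policy selects dialogue acts under that subgoal, driven by an intrinsic subgoal-completion reward plus the sparse extrinsic task reward. The subgoal names, toy environment, action set, and reward values are illustrative assumptions, and tabular Q-learning stands in for the paper's deep hierarchical reinforcement learning; this is not the paper's SDN or its travel-planning agent.

# Minimal illustrative sketch (not the paper's implementation): a two-level
# dialogue policy where a top-level policy picks a discovered subgoal and a
# low-level policy picks dialogue acts until that subgoal completes.
# All names, state/action spaces, and reward values below are hypothetical.
import random
from collections import defaultdict

SUBGOALS = ["book_flight", "book_hotel"]      # assume these segments were discovered
ACTIONS = ["request_slot", "inform_slot", "confirm"]
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.2

q_top = defaultdict(float)   # Q(state, subgoal) for the top-level policy
q_low = defaultdict(float)   # Q((state, subgoal), action) for the low-level policy

def eps_greedy(q, key, choices):
    if random.random() < EPS:
        return random.choice(choices)
    return max(choices, key=lambda c: q[(key, c)])

class ToyDialogueEnv:
    """Hypothetical stand-in for a composite travel-planning dialogue."""
    def reset(self):
        self.progress = {g: 0 for g in SUBGOALS}
        return self._state()
    def _state(self):
        return tuple(sorted(self.progress.items()))
    def done(self):
        return all(v >= 3 for v in self.progress.values())
    def subgoal_done(self, g):
        return self.progress[g] >= 3
    def step(self, subgoal, action):
        if action == "inform_slot":              # only informing advances the subgoal
            self.progress[subgoal] += 1
        intrinsic = 1.0 if self.subgoal_done(subgoal) else -0.05
        extrinsic = 5.0 if self.done() else -0.1  # sparse task-level reward
        return self._state(), extrinsic, intrinsic

def run_episode(env):
    state, total = env.reset(), 0.0
    while not env.done():
        start, subgoal = state, eps_greedy(q_top, state, SUBGOALS)
        ext_return = 0.0
        while not env.subgoal_done(subgoal) and not env.done():
            action = eps_greedy(q_low, (state, subgoal), ACTIONS)
            nxt, ext_r, intr_r = env.step(subgoal, action)
            # low level learns from the intrinsic (subgoal-completion) reward
            best = max(q_low[((nxt, subgoal), a)] for a in ACTIONS)
            q_low[((state, subgoal), action)] += ALPHA * (
                intr_r + GAMMA * best - q_low[((state, subgoal), action)])
            ext_return += ext_r
            state = nxt
        # top level learns from the extrinsic return accumulated over the subgoal
        best = max(q_top[(state, g)] for g in SUBGOALS)
        q_top[(start, subgoal)] += ALPHA * (
            ext_return + GAMMA * best - q_top[(start, subgoal)])
        total += ext_return
    return total

env = ToyDialogueEnv()
returns = [run_episode(env) for _ in range(500)]
print("avg return over last 50 episodes:", sum(returns[-50:]) / 50)

The key point the sketch illustrates is the reward decomposition: the low-level learner receives a dense intrinsic signal for completing its assigned subgoal, which mitigates the sparsity of the task-level reward that only the top-level learner consumes.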
Anthology ID:
D18-1253
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2298–2309
URL:
https://aclanthology.org/D18-1253
DOI:
10.18653/v1/D18-1253
Cite (ACL):
Da Tang, Xiujun Li, Jianfeng Gao, Chong Wang, Lihong Li, and Tony Jebara. 2018. Subgoal Discovery for Hierarchical Dialogue Policy Learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2298–2309, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Subgoal Discovery for Hierarchical Dialogue Policy Learning (Tang et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1253.pdf
Video:
https://vimeo.com/305937184