What we need to learn if we want to do and not just talk

Rashmi Gangadharaiah, Balakrishnan Narayanaswamy, Charles Elkan


Abstract
In task-oriented dialog, agents need to generate both fluent natural language responses and correct external actions such as database queries and updates. This paper makes the first attempt at evaluating state-of-the-art models on a large real-world task with human users. We show that methods that achieve state-of-the-art performance on synthetic datasets perform poorly on real-world dialog tasks. We propose a hybrid model in which nearest-neighbor retrieval generates fluent responses, while Seq2Seq-style models ensure dialog coherence and generate accurate external actions. On the customer support data, the hybrid model achieves a 78% relative improvement in fluency and a 200% improvement in the accuracy of external calls.
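The abstract describes a retrieve-or-generate hybrid: reuse a fluent human-written response when a close match to a past utterance exists, and let a trained Seq2Seq model handle coherence and external actions otherwise. Below is a minimal sketch of that control flow, not the authors' implementation; the toy corpus, the TF-IDF retriever, the 0.5 distance threshold, and the seq2seq_generate stub are all illustrative assumptions.

# Minimal sketch of the retrieve-or-generate hybrid (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Toy dialog history: (customer utterance, agent response) pairs.
PAIRS = [
    ("my order never arrived",
     "I'm sorry to hear that. Let me look up your order."),
    ("i want to change my shipping address",
     "Sure, what is the new address?"),
    ("cancel my subscription",
     "I can help with that. May I have your account email?"),
]

vectorizer = TfidfVectorizer().fit([u for u, _ in PAIRS])
index = NearestNeighbors(n_neighbors=1, metric="cosine").fit(
    vectorizer.transform([u for u, _ in PAIRS]))

def seq2seq_generate(utterance):
    """Stand-in for a trained Seq2Seq model that emits responses and
    external actions (e.g., an api_call for a database query)."""
    return "api_call lookup({!r})".format(utterance)

def respond(utterance, max_distance=0.5):
    """Reuse a retrieved human response if a close neighbor exists;
    otherwise defer to the Seq2Seq generator."""
    dist, idx = index.kneighbors(vectorizer.transform([utterance]))
    if dist[0][0] <= max_distance:
        return PAIRS[idx[0][0]][1]   # fluent, human-written response
    return seq2seq_generate(utterance)

print(respond("my order never arrived"))     # retrieval hit
print(respond("what is the weather today"))  # falls back to Seq2Seq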
Anthology ID: N18-3004
Volume: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)
Month: June
Year: 2018
Address: New Orleans, Louisiana
Editors: Srinivas Bangalore, Jennifer Chu-Carroll, Yunyao Li
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 25–32
URL: https://aclanthology.org/N18-3004
DOI: 10.18653/v1/N18-3004
Cite (ACL): Rashmi Gangadharaiah, Balakrishnan Narayanaswamy, and Charles Elkan. 2018. What we need to learn if we want to do and not just talk. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), pages 25–32, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal): What we need to learn if we want to do and not just talk (Gangadharaiah et al., NAACL 2018)
PDF: https://aclanthology.org/N18-3004.pdf
Video: https://aclanthology.org/N18-3004.mp4