Huan Wang


2024

DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI
Jianguo Zhang | Kun Qian | Zhiwei Liu | Shelby Heinecke | Rui Meng | Ye Liu | Zhou Yu | Huan Wang | Silvio Savarese | Caiming Xiong
Findings of the Association for Computational Linguistics: EACL 2024

Despite advancements in conversational AI, language models encounter challenges in handling diverse conversational tasks, and existing dialogue dataset collections often lack diversity and comprehensiveness. To tackle these issues, we introduce DialogStudio: the largest and most diverse collection of dialogue datasets, unified under a consistent format while preserving their original information. Our collection encompasses data from open-domain dialogues, task-oriented dialogues, natural language understanding, conversational recommendation, dialogue summarization, and knowledge-grounded dialogues, making it an incredibly rich and diverse resource for dialogue research and model training. To further enhance the utility of DialogStudio, we identify the license for each dataset and design external knowledge and domain-aware prompts for selected dialogues to facilitate instruction-aware fine-tuning. To improve transparency and support dataset- and task-based research, as well as language model pre-training, all datasets, licenses, code, and models associated with DialogStudio will be made publicly accessible.
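As a purely hypothetical illustration of what a dialogue unified under a consistent format with a domain-aware, instruction-style prompt might look like (the released schema, field names, and values may differ), consider the following sketch in Python:

# Hypothetical unified dialogue record; field names and values are
# placeholders, not the released DialogStudio schema.
example = {
    "dataset": "MULTIWOZ2_2",                      # source dataset (placeholder)
    "license": "<per-dataset license>",            # license identified per dataset
    "prompt": "You are a helpful booking assistant for hotels.",  # domain-aware prompt
    "external_knowledge": "Hotel Alpha: budget, city centre, free wifi.",  # placeholder
    "turns": [
        {"speaker": "user", "text": "I need a cheap hotel in the centre."},
        {"speaker": "system", "text": "Hotel Alpha is a budget option in the centre."},
    ],
}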

2023

Enhancing Performance on Seen and Unseen Dialogue Scenarios using Retrieval-Augmented End-to-End Task-Oriented System
Jianguo Zhang | Stephen Roller | Kun Qian | Zhiwei Liu | Rui Meng | Shelby Heinecke | Huan Wang | Silvio Savarese | Caiming Xiong
Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue

End-to-end task-oriented dialogue (TOD) systems have achieved promising performance by leveraging the sophisticated natural language understanding and natural language generation capabilities of pre-trained models. This work equips TOD systems with greater flexibility through a simple cache. The cache makes it possible to dynamically update the TOD system and to handle both existing and unseen dialogue scenarios. Towards this end, we first fine-tune a retrieval module to effectively retrieve the most relevant information entries from the cache. We then train end-to-end TOD models that can refer to and ground on both the dialogue history and the retrieved information during TOD generation. The introduced cache is straightforward to construct, and the backbone models of the TOD systems are compatible with existing pre-trained generative models. Extensive experiments demonstrate the superior performance of our framework, with a notable improvement in non-empty joint goal accuracy of 6.7% over strong baselines.
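A minimal sketch of the cache-and-retrieve flow described above, not the paper's implementation: embed stands in for the fine-tuned retrieval module and generate for the end-to-end TOD model, both hypothetical callables supplied by the user.

# Illustrative retrieve-then-generate flow; `embed` and `generate` are
# hypothetical stand-ins for the fine-tuned retriever and the TOD model.
from typing import Callable, List, Sequence

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb + 1e-9)

def respond(
    history: List[str],
    cache: List[str],
    embed: Callable[[str], Sequence[float]],
    generate: Callable[[str], str],
    top_k: int = 3,
) -> str:
    # Score every cache entry against the flattened dialogue history.
    query_vec = embed(" ".join(history))
    ranked = sorted(cache, key=lambda e: cosine(embed(e), query_vec), reverse=True)
    retrieved = ranked[:top_k]
    # Ground generation on both the dialogue history and the retrieved entries.
    prompt = "[history] " + " ".join(history) + " [retrieved] " + " | ".join(retrieved)
    return generate(prompt)

In this reading, updating the system for an unseen scenario amounts to appending new entries to the cache, with no retraining of the generator.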

2021

BatchMixup: Improving Training by Interpolating Hidden States of the Entire Mini-batch
Wenpeng Yin | Huan Wang | Jin Qu | Caiming Xiong
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Unsupervised Paraphrasing with Pretrained Language Models
Tong Niu | Semih Yavuz | Yingbo Zhou | Nitish Shirish Keskar | Huan Wang | Caiming Xiong
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Paraphrase generation has benefited extensively from recent progress in the design of training objectives and model architectures. However, previous explorations have largely focused on supervised methods, which require a large amount of labeled data that is costly to collect. To address this drawback, we adopt a transfer learning approach and propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting. Our recipe consists of task adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking (DB). To enforce a surface form dissimilar from the input, whenever the language model emits a token contained in the source sequence, DB prevents the model from outputting the subsequent source token at the next generation step. We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair (QQP) and the ParaNMT datasets and is robust to the domain shift between these two datasets of distinct distributions. We also demonstrate that our model transfers to paraphrasing in other languages without any additional fine-tuning.
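A minimal sketch of the blocking rule described above, assuming greedy decoding over token ids; logits_fn is a hypothetical stand-in for the language model, and the paper's full Dynamic Blocking procedure may include details not shown here.

# Sketch of the Dynamic Blocking rule: when the last emitted token occurs in
# the source, block its source successor(s) at the next decoding step.
import math
from collections import defaultdict
from typing import Callable, List, Sequence

def dynamic_blocking_decode(
    source_ids: Sequence[int],
    logits_fn: Callable[[Sequence[int], List[int]], Sequence[float]],  # (source, prefix) -> next-token logits
    max_len: int = 64,
    eos_id: int = 2,
) -> List[int]:
    # Map each source token to the token(s) that immediately follow it in the source.
    successors = defaultdict(set)
    for prev, nxt in zip(source_ids, source_ids[1:]):
        successors[prev].add(nxt)

    output: List[int] = []
    for _ in range(max_len):
        logits = list(logits_fn(source_ids, output))
        # If the previously emitted token occurs in the source, forbid its
        # source successor(s) this step to discourage verbatim copying.
        if output and output[-1] in successors:
            for blocked in successors[output[-1]]:
                logits[blocked] = -math.inf
        next_id = max(range(len(logits)), key=logits.__getitem__)  # greedy choice
        output.append(next_id)
        if next_id == eos_id:
            break
    return output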

2016

A Constituent Syntactic Parse Tree Based Discourse Parser
Zhongyi Li | Hai Zhao | Chenxi Pang | Lili Wang | Huan Wang
Proceedings of the CoNLL-16 shared task

2002

PCFG Parsing for Restricted Classical Chinese Texts
Liang Huang | Yinan Peng | Huan Wang | Zhenyu Wu
COLING-02: The First SIGHAN Workshop on Chinese Language Processing