How Large a Vocabulary Does Text Classification Need? A Variational Approach to Vocabulary Selection

Wenhu Chen, Yu Su, Yilin Shen, Zhiyu Chen, Xifeng Yan, William Yang Wang


Abstract
With the rapid development of deep learning, deep neural networks have been widely adopted in many real-life natural language applications. In deep neural networks, a pre-defined vocabulary is required to vectorize text inputs. The canonical approach to selecting this vocabulary is based on word frequency: a threshold is chosen to cut off the long tail of the distribution. However, we observe that such a simple approach can easily lead to an under-sized or over-sized vocabulary. Therefore, we are interested in understanding how end-task classification accuracy relates to vocabulary size and what the minimum vocabulary size is that achieves a given level of performance. In this paper, we provide a more sophisticated method, variational vocabulary dropout (VVD), which builds on variational dropout to perform vocabulary selection and can intelligently select the subset of the vocabulary needed to reach the required performance. To evaluate different algorithms on the newly proposed vocabulary selection problem, we propose two new metrics: Area Under the Accuracy-Vocab Curve and Vocab Size under X% Accuracy Drop. Through extensive experiments on various NLP classification tasks, our variational framework is shown to significantly outperform frequency-based and other selection baselines on these metrics.
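The authors' implementation is in the linked repository (wenhuchen/Variational-Vocabulary-Selection); the sketch below is not that code. It is a minimal illustration of the core idea, assuming the sparse variational dropout approximation of Molchanov et al. (2017) applied per embedding row: each word learns a dropout rate alpha, a KL penalty pushes uninformative words toward high alpha, and words whose log-alpha exceeds a threshold are pruned from the vocabulary. All names and hyperparameters here are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalVocabDropout(nn.Module):
    """Per-word variational dropout over embedding rows (illustrative sketch).

    Each word carries a learned noise variance; its dropout rate
    alpha_i = sigma_i^2 / theta_i^2 is driven up by a KL penalty, and words
    with high log-alpha are pruned from the vocabulary.
    """

    def __init__(self, vocab_size, emb_dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # one log-variance per word (row-wise, not per embedding element)
        self.log_sigma2 = nn.Parameter(torch.full((vocab_size, 1), -10.0))

    def log_alpha(self):
        theta2 = (self.emb.weight ** 2).mean(dim=1, keepdim=True) + 1e-8
        return torch.clamp(self.log_sigma2 - theta2.log(), min=-10.0, max=10.0)

    def forward(self, token_ids):
        e = self.emb(token_ids)                        # (B, T, D)
        if self.training:
            # reparameterization: e * (1 + sqrt(alpha) * eps), eps ~ N(0, 1)
            alpha = self.log_alpha()[token_ids].exp()  # (B, T, 1), broadcasts
            e = e * (1.0 + alpha.sqrt() * torch.randn_like(e))
        return e

    def kl(self):
        # Molchanov et al. (2017) closed-form approximation of the KL term,
        # added to the task loss to encourage sparsity over the vocabulary
        la = self.log_alpha()
        k1, k2, k3 = 0.63576, 1.8732, 1.48695
        neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * F.softplus(-la) - k1
        return -neg_kl.sum()

    def selected_vocab(self, threshold=3.0):
        # indices of the words retained after pruning high-dropout rows
        return (self.log_alpha().squeeze(1) < threshold).nonzero(as_tuple=True)[0]

During training one would minimize task_loss + kl_weight * model.kl(); sweeping the pruning threshold then traces out an accuracy-versus-vocabulary-size curve. One plausible formalization of the paper's first metric (the exact normalization in the paper may differ) is the trapezoidal area under that curve with vocabulary size rescaled to [0, 1]:

import numpy as np

def accuracy_vocab_auc(vocab_sizes, accuracies):
    # normalized area under the accuracy-vocab curve (illustrative definition)
    x = np.asarray(vocab_sizes, dtype=float)
    y = np.asarray(accuracies, dtype=float)
    order = np.argsort(x)
    return np.trapz(y[order], x[order] / x.max())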
Anthology ID:
N19-1352
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Jill Burstein, Christy Doran, Thamar Solorio
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
3487–3497
URL:
https://aclanthology.org/N19-1352
DOI:
10.18653/v1/N19-1352
Cite (ACL):
Wenhu Chen, Yu Su, Yilin Shen, Zhiyu Chen, Xifeng Yan, and William Yang Wang. 2019. How Large a Vocabulary Does Text Classification Need? A Variational Approach to Vocabulary Selection. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3487–3497, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
How Large a Vocabulary Does Text Classification Need? A Variational Approach to Vocabulary Selection (Chen et al., NAACL 2019)
PDF:
https://aclanthology.org/N19-1352.pdf
Video:
https://aclanthology.org/N19-1352.mp4
Code:
wenhuchen/Variational-Vocabulary-Selection
Data:
AG News, MultiNLI, SNLI