Conditional Word Embedding and Hypothesis Testing via Bayes-by-Backprop

Rujun Han, Michael Gill, Arthur Spirling, Kyunghyun Cho


Abstract
Conventional word embedding models do not leverage information from document meta-data, and they do not model uncertainty. We address these concerns with a model that incorporates document covariates to estimate conditional word embedding distributions. Our model allows for (a) hypothesis tests about the meanings of terms, (b) assessments as to whether a word is near or far from another conditioned on different covariate values, and (c) assessments as to whether estimated differences are statistically significant.
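The abstract's key ingredient is Bayes-by-Backprop (Blundell et al., 2015): instead of a point-estimate embedding, each word gets a Gaussian variational posterior whose mean and scale are trained by backpropagating through reparameterized samples, which is what makes the uncertainty estimates and hypothesis tests possible. The sketch below is not the paper's conditional model; it is a minimal NumPy illustration of the Bayes-by-Backprop principle for a single embedding vector, with a hypothetical quadratic loss standing in for the conditional skip-gram objective, and hand-derived gradients so the example stays self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    return np.log1p(np.exp(x))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Variational posterior q(w) = N(mu, diag(sigma^2)), with sigma = softplus(rho)
# so the scale stays positive. Prior is the standard normal N(0, I).
dim = 4
mu = np.zeros(dim)
rho = np.full(dim, -1.0)
target = np.array([0.9, -0.3, 0.6, 0.0])  # stand-in "context" vector (illustrative)

lr, n_samples = 0.05, 8
for step in range(3000):
    sigma = softplus(rho)
    g_mu = np.zeros(dim)
    g_rho = np.zeros(dim)
    for _ in range(n_samples):
        eps = rng.standard_normal(dim)
        w = mu + sigma * eps             # reparameterization trick
        g_loss = 2.0 * (w - target)      # d/dw of the toy loss ||w - target||^2
        g_mu += g_loss                   # dw/dmu = 1
        g_rho += g_loss * eps * sigmoid(rho)  # dw/drho = eps * softplus'(rho)
    g_mu /= n_samples
    g_rho /= n_samples
    # Analytic KL(q || N(0, I)) gradients: dKL/dmu = mu, dKL/dsigma = sigma - 1/sigma
    g_mu += mu
    g_rho += (sigma - 1.0 / sigma) * sigmoid(rho)
    mu -= lr * g_mu
    rho -= lr * g_rho

# For this quadratic loss the optimum is available in closed form:
# mu* = (2/3) * target and sigma* = 1/sqrt(3), so the learned sigma
# quantifies uncertainty rather than collapsing to zero.
```

In the paper's setting the posterior means and variances (conditioned on document covariates) are what feed the significance tests over word-pair distances; this sketch only shows the estimation machinery in its simplest form.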
Anthology ID:
D18-1527
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4890–4895
URL:
https://aclanthology.org/D18-1527
DOI:
10.18653/v1/D18-1527
Cite (ACL):
Rujun Han, Michael Gill, Arthur Spirling, and Kyunghyun Cho. 2018. Conditional Word Embedding and Hypothesis Testing via Bayes-by-Backprop. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4890–4895, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Conditional Word Embedding and Hypothesis Testing via Bayes-by-Backprop (Han et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1527.pdf
Attachment:
D18-1527.Attachment.pdf