Interpretable Word Embedding Contextualization

Kyoung-Rok Jang, Sung-Hyon Myaeng, Sang-Bum Kim


Abstract
In this paper, we propose a method for calibrating a word embedding so that the semantics it conveys become more relevant to the context. Our method is novel in that its output shows clearly which senses originally present in a target word embedding become stronger or weaker. This is made possible by using sparse coding to recover the senses that comprise a word embedding.
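The sparse-coding step mentioned in the abstract can be illustrated with a minimal sketch. The paper itself does not publish code; the snippet below only assumes a precomputed dictionary `D` of unit-norm "sense" atoms and recovers a sparse activation vector via ISTA (iterative soft-thresholding) on the lasso objective. All names (`sparse_code`, `D`, `lam`) are hypothetical, not from the paper.

```python
import numpy as np

def sparse_code(x, D, lam=0.05, n_iter=500):
    """Recover a sparse sense-activation vector `a` with x ≈ D @ a by
    minimizing 0.5 * ||x - D a||^2 + lam * ||a||_1 via ISTA.
    (Illustrative sketch; not the authors' implementation.)"""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        a = a - grad / L                   # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

# Toy demo: 5-dim "embeddings", dictionary of 8 candidate sense atoms.
rng = np.random.default_rng(0)
D = rng.normal(size=(5, 8))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
true_a = np.zeros(8)
true_a[[1, 4]] = [1.0, 0.5]                # two "senses" active in this word
x = D @ true_a                             # synthetic word embedding
a = sparse_code(x, D)
print(np.round(a, 2))                      # sparse sense activations
```

Under this view, contextualization amounts to re-weighting the recovered activations `a` (strengthening context-relevant senses, weakening the rest) and re-synthesizing the embedding as `D @ a`.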
Anthology ID:
W18-5442
Volume:
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2018
Address:
Brussels, Belgium
Editors:
Tal Linzen, Grzegorz Chrupała, Afra Alishahi
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
341–343
URL:
https://aclanthology.org/W18-5442
DOI:
10.18653/v1/W18-5442
Cite (ACL):
Kyoung-Rok Jang, Sung-Hyon Myaeng, and Sang-Bum Kim. 2018. Interpretable Word Embedding Contextualization. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 341–343, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Interpretable Word Embedding Contextualization (Jang et al., EMNLP 2018)
PDF:
https://aclanthology.org/W18-5442.pdf