Refining Pretrained Word Embeddings Using Layer-wise Relevance Propagation

Akira Utsumi


Abstract
In this paper, we propose a simple method for refining pretrained word embeddings using layer-wise relevance propagation. Given a target semantic representation one would like word vectors to reflect, our method first trains a neural network to map the original word vectors to the target representation. The estimated target values are then propagated backward toward the word vectors, and a relevance score is computed for each dimension of the word vectors. Finally, the relevance score vectors are used to refine the original word vectors so that they are projected into the subspace that reflects the information relevant to the target representation. An evaluation experiment using binary classification of word pairs demonstrates that the vectors refined by our method outperform the original vectors.
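To make the pipeline in the abstract concrete, the following is a minimal sketch of its three steps (fitting a mapping network, propagating relevance scores back to the input dimensions, and re-weighting the embeddings), assuming a one-hidden-layer ReLU network and the epsilon rule of layer-wise relevance propagation. The function names, the single hidden layer, and the element-wise re-weighting in refine are illustrative assumptions rather than the paper's exact formulation.

import numpy as np

def lrp_relevance(x, W1, b1, W2, b2, eps=1e-6):
    """Epsilon-rule LRP: propagate the estimated target back to input dimensions.

    W1, b1, W2, b2 are the weights of a mapping network (word vector -> target
    representation) that is assumed to have been trained beforehand.
    """
    h = np.maximum(0.0, W1 @ x + b1)          # hidden activations (ReLU)
    y = W2 @ h + b2                           # estimated target representation
    R_out = y                                 # relevance at the output layer
    # Output layer -> hidden layer
    z = W2 * h                                # per-connection contributions, shape (out, hidden)
    s = R_out / (z.sum(axis=1) + eps)
    R_hidden = (z * s[:, None]).sum(axis=0)
    # Hidden layer -> input (embedding) dimensions
    z = W1 * x                                # shape (hidden, dim)
    s = R_hidden / (z.sum(axis=1) + eps)
    return (z * s[:, None]).sum(axis=0)       # one relevance score per embedding dimension

def refine(word_vectors, mean_relevance):
    """Re-weight embedding dimensions by their averaged relevance scores
    (a simplification of the subspace projection described in the abstract)."""
    w = mean_relevance / (np.abs(mean_relevance).max() + 1e-12)
    return word_vectors * w

# Toy usage with random weights standing in for a trained mapping network.
dim, hidden, out = 300, 100, 50
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((hidden, dim)), np.zeros(hidden)
W2, b2 = rng.standard_normal((out, hidden)), np.zeros(out)
vectors = rng.standard_normal((1000, dim))
R = np.stack([lrp_relevance(v, W1, b1, W2, b2) for v in vectors])
refined = refine(vectors, R.mean(axis=0))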
Anthology ID:
D18-1520
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4840–4846
URL:
https://aclanthology.org/D18-1520
DOI:
10.18653/v1/D18-1520
Cite (ACL):
Akira Utsumi. 2018. Refining Pretrained Word Embeddings Using Layer-wise Relevance Propagation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4840–4846, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Refining Pretrained Word Embeddings Using Layer-wise Relevance Propagation (Utsumi, EMNLP 2018)
PDF:
https://aclanthology.org/D18-1520.pdf