Rotated Word Vector Representations and their Interpretability

Sungjoon Park, JinYeong Bak, Alice Oh


Abstract
Vector representations of words improve performance in various NLP tasks, but high-dimensional word vectors are notoriously difficult to interpret. We apply several rotation algorithms to word vector representations to improve their interpretability. Unlike previous approaches that induce sparsity, the rotated vectors remain interpretable while preserving the expressive power of the original vectors. Furthermore, any prebuilt word vector representation can be rotated for improved interpretability. We apply rotation to Skip-gram and GloVe vectors and compare their expressive power and interpretability with those of the original vectors and of sparse overcomplete vectors. The results show that the rotated vectors outperform both the original and the sparse overcomplete vectors on interpretability and expressiveness tasks.
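To make the approach concrete, below is a minimal sketch of one classical factor-rotation criterion, varimax, applied to a prebuilt embedding matrix. This is an illustrative implementation of the standard SVD-based varimax procedure, not the authors' released code; the function name and the toy matrix are assumptions for the example.

    import numpy as np

    def varimax_rotation(W, tol=1e-6, max_iter=100):
        """Find an orthogonal matrix R maximizing the varimax criterion of W @ R.

        W: (n_words, dim) embedding matrix. Returns the rotation matrix R.
        """
        n, k = W.shape
        R = np.eye(k)
        crit = 0.0
        for _ in range(max_iter):
            L = W @ R  # current rotated embedding
            # Gradient of Kaiser's varimax objective at L
            G = W.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / n)
            # Project the gradient onto the set of orthogonal matrices via SVD
            U, S, Vt = np.linalg.svd(G)
            R = U @ Vt
            crit_new = S.sum()
            if crit > 0 and crit_new < crit * (1 + tol):
                break  # criterion stopped improving
            crit = crit_new
        return R

    # Toy usage on a random stand-in for prebuilt vectors. Because R is
    # orthogonal, pairwise dot products (hence cosine similarities, hence
    # the expressive power of the vectors) are preserved exactly.
    W = np.random.randn(1000, 50)
    R = varimax_rotation(W)
    W_rot = W @ R
    assert np.allclose(W @ W.T, W_rot @ W_rot.T)

The orthogonality of R is what distinguishes this family of methods from sparsity-inducing transformations: interpretability comes from aligning dimensions with the rotated basis rather than from altering the geometry of the vector space.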
Anthology ID:
D17-1041
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
401–411
URL:
https://aclanthology.org/D17-1041
DOI:
10.18653/v1/D17-1041
Cite (ACL):
Sungjoon Park, JinYeong Bak, and Alice Oh. 2017. Rotated Word Vector Representations and their Interpretability. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 401–411, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Rotated Word Vector Representations and their Interpretability (Park et al., EMNLP 2017)
PDF:
https://aclanthology.org/D17-1041.pdf
Code:
SungjoonPark/factor_rotation