Subword-level Word Vector Representations for Korean

Sungjoon Park, Jeongmin Byun, Sion Baek, Yongseok Cho, Alice Oh


Abstract
Research on distributed word representations is focused on widely-used languages such as English. Although the same methods can be used for other languages, language-specific knowledge can enhance the accuracy and richness of word vector representations. In this paper, we look at improving distributed word representations for Korean using knowledge about the unique linguistic structure of Korean. Specifically, we decompose Korean words into the jamo-level, beyond the character-level, allowing a systematic use of subword information. To evaluate the vectors, we develop Korean test sets for word similarity and analogy and make them publicly available. The results show that our simple method outperforms word2vec and character-level Skip-Grams on semantic and syntactic similarity and analogy tasks and contributes positively toward downstream NLP tasks such as sentiment analysis.
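The core idea in the abstract is decomposing each Hangul syllable into its constituent jamo (initial consonant, medial vowel, final consonant) so that subword units below the character level can be used. The sketch below is not the authors' released code; it is a minimal illustration of jamo decomposition using standard Unicode block arithmetic for precomposed Hangul syllables (U+AC00 onward). The "-" placeholder for an empty final consonant is an assumption made here so every syllable yields a fixed-length jamo triple.

    # Minimal sketch of jamo-level decomposition of Hangul syllables.
    CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")            # 19 initial consonants
    JUNGSEONG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")      # 21 medial vowels
    JONGSEONG = ["-"] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # empty + 27 finals

    def to_jamo(word):
        """Decompose each Hangul syllable into (initial, medial, final) jamo."""
        jamo = []
        for ch in word:
            code = ord(ch) - 0xAC00
            if 0 <= code <= 11171:                        # precomposed syllable block 가..힣
                jamo.append(CHOSEONG[code // 588])        # 588 = 21 * 28
                jamo.append(JUNGSEONG[(code % 588) // 28])
                jamo.append(JONGSEONG[code % 28])         # index 0 means no final consonant
            else:
                jamo.append(ch)                           # pass non-Hangul characters through
        return jamo

    print(to_jamo("먹었다"))
    # ['ㅁ', 'ㅓ', 'ㄱ', 'ㅇ', 'ㅓ', 'ㅆ', 'ㄷ', 'ㅏ', '-']

In a FastText-style setup, jamo sequences like these would be broken into n-grams whose vectors are summed with the word vector; the exact n-gram ranges and the empty-jongseong symbol used in the paper should be taken from the linked repository rather than this sketch.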
Anthology ID:
P18-1226
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2429–2438
URL:
https://aclanthology.org/P18-1226
DOI:
10.18653/v1/P18-1226
Cite (ACL):
Sungjoon Park, Jeongmin Byun, Sion Baek, Yongseok Cho, and Alice Oh. 2018. Subword-level Word Vector Representations for Korean. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2429–2438, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Subword-level Word Vector Representations for Korean (Park et al., ACL 2018)
PDF:
https://aclanthology.org/P18-1226.pdf
Poster:
P18-1226.Poster.pdf
Code:
SungjoonPark/KoreanWordVectors