Word Embeddings for Code-Mixed Language Processing

Adithya Pratapa, Monojit Choudhury, Sunayana Sitaram


Abstract
We compare three existing bilingual word embedding approaches, and a novel approach of training skip-grams on synthetic code-mixed text generated through linguistic models of code-mixing, on two tasks: sentiment analysis and POS tagging for code-mixed text. Our results show that while CVM- and CCA-based embeddings perform as well as the proposed embedding technique on semantic and syntactic tasks respectively, the proposed approach provides the best performance for both tasks overall. Thus, this study demonstrates that existing bilingual embedding techniques are not ideal for code-mixed text processing and that there is a need for learning multilingual word embeddings from code-mixed text.
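
As a rough illustration of the skip-gram component described in the abstract, the sketch below trains skip-gram embeddings on synthetic code-mixed sentences using gensim's Word2Vec. This is not the authors' implementation; the toy sentences and all hyperparameter values are illustrative assumptions.

# Minimal sketch (an assumption, not the paper's actual pipeline) of training
# skip-gram embeddings on synthetic code-mixed text with gensim's Word2Vec.
from gensim.models import Word2Vec

# Toy synthetic Hindi-English code-mixed sentences (illustrative only);
# in the paper such sentences are generated by linguistic models of code-mixing.
synthetic_cm_sentences = [
    ["mujhe", "yeh", "movie", "bahut", "achhi", "lagi"],
    ["weekend", "par", "hum", "shopping", "karenge"],
    ["this", "khana", "is", "really", "tasty"],
]

# sg=1 selects the skip-gram objective; hyperparameters are placeholders.
model = Word2Vec(
    sentences=synthetic_cm_sentences,
    vector_size=100,   # embedding dimensionality (gensim >= 4.0 argument name)
    window=5,
    min_count=1,
    sg=1,
)

# The learned vectors can then feed downstream code-mixed tasks
# such as sentiment analysis or POS tagging.
print(model.wv["movie"].shape)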
Anthology ID:
D18-1344
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
3067–3072
URL:
https://aclanthology.org/D18-1344
DOI:
10.18653/v1/D18-1344
Cite (ACL):
Adithya Pratapa, Monojit Choudhury, and Sunayana Sitaram. 2018. Word Embeddings for Code-Mixed Language Processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3067–3072, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Word Embeddings for Code-Mixed Language Processing (Pratapa et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1344.pdf