Injecting Lexical Contrast into Word Vectors by Guiding Vector Space Specialisation

Ivan Vulić


Abstract
Word vector space specialisation models offer a portable, light-weight approach to fine-tuning arbitrary distributional vector spaces to discern between synonymy and antonymy. Their effectiveness is drawn from external linguistic constraints that specify the exact lexical relation between words. In this work, we show that a careful selection of the external constraints can steer and improve the specialisation. By simply selecting appropriate constraints, we report state-of-the-art results on a suite of tasks with well-defined benchmarks where modeling lexical contrast is crucial: 1) true semantic similarity, with highest reported scores on SimLex-999 and SimVerb-3500 to date; 2) detecting antonyms; and 3) distinguishing antonyms from synonyms.
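The specialisation idea the abstract describes can be illustrated with a toy sketch: synonym pairs are pulled together (attract), antonym pairs are pushed apart (repel), and a regulariser keeps each vector near its original distributional position. This is a minimal, hypothetical implementation for illustration only, not the paper's exact objective or hyperparameters.

```python
import numpy as np

def specialise(vectors, synonyms, antonyms,
               margin=0.6, lr=0.05, epochs=5, reg=0.1):
    """Toy attract-repel style specialisation (illustrative sketch).

    vectors:  dict mapping word -> np.ndarray
    synonyms: list of (word, word) pairs to pull together
    antonyms: list of (word, word) pairs to push apart
    """
    # Remember the initial (distributional) positions for regularisation.
    orig = {w: v.copy() for w, v in vectors.items()}
    for _ in range(epochs):
        # ATTRACT: move synonym pairs closer together.
        for a, b in synonyms:
            diff = vectors[a] - vectors[b]
            vectors[a] -= lr * diff
            vectors[b] += lr * diff
        # REPEL: move antonym pairs apart, up to a margin.
        for a, b in antonyms:
            diff = vectors[a] - vectors[b]
            if np.linalg.norm(diff) < margin:
                vectors[a] += lr * diff
                vectors[b] -= lr * diff
        # Regulariser: stay close to the original vector space.
        for w in vectors:
            vectors[w] -= lr * reg * (vectors[w] - orig[w])
    return vectors
```

After a few epochs, synonym pairs end up closer and antonym pairs farther apart than in the input space, which is the behaviour the specialisation models rely on; the choice of which pairs feed the `synonyms` and `antonyms` lists is exactly the constraint-selection question the paper studies.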
Anthology ID:
W18-3018
Volume:
Proceedings of the Third Workshop on Representation Learning for NLP
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Isabelle Augenstein, Kris Cao, He He, Felix Hill, Spandana Gella, Jamie Kiros, Hongyuan Mei, Dipendra Misra
Venue:
RepL4NLP
SIG:
SIGREP
Publisher:
Association for Computational Linguistics
Pages:
137–143
URL:
https://aclanthology.org/W18-3018
DOI:
10.18653/v1/W18-3018
Cite (ACL):
Ivan Vulić. 2018. Injecting Lexical Contrast into Word Vectors by Guiding Vector Space Specialisation. In Proceedings of the Third Workshop on Representation Learning for NLP, pages 137–143, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Injecting Lexical Contrast into Word Vectors by Guiding Vector Space Specialisation (Vulić, RepL4NLP 2018)
PDF:
https://aclanthology.org/W18-3018.pdf