Topic Sensitive Attention on Generic Corpora Corrects Sense Bias in Pretrained Embeddings

Vihari Piratla, Sunita Sarawagi, Soumen Chakrabarti


Abstract
Given a small corpus D_T pertaining to a limited set of focused topics, our goal is to train embeddings that accurately capture the sense of words in the topic in spite of the limited size of D_T. These embeddings may be used in various tasks involving D_T. A popular strategy in limited data settings is to adapt pretrained embeddings E trained on a large corpus. To correct for sense drift, fine-tuning, regularization, projection, and pivoting have been proposed recently. Among these, regularization informed by a word’s corpus frequency performed well, but we improve upon it using a new regularizer based on the stability of its cooccurrence with other words. However, a thorough comparison across ten topics, spanning three tasks, with standardized settings of hyper-parameters, reveals that even the best embedding adaptation strategies provide small gains beyond well-tuned baselines, which many earlier comparisons ignored. In a bold departure from adapting pretrained embeddings, we propose using D_T to probe, attend to, and borrow fragments from any large, topic-rich source corpus (such as Wikipedia), which need not be the corpus used to pretrain embeddings. This step is made scalable and practical by suitable indexing. We reach the surprising conclusion that even limited corpus augmentation is more useful than adapting embeddings, which suggests that non-dominant sense information may be irrevocably obliterated from pretrained embeddings and cannot be salvaged by adaptation.
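To make the corpus-augmentation idea concrete, below is a minimal, illustrative sketch (not the authors' implementation; the function names and the idf-weighted overlap scoring are assumptions) of the pipeline the abstract describes: the small topic corpus D_T is used to select topic-relevant fragments from a large generic source corpus through an inverted index, and embeddings are then trained on D_T together with the retrieved fragments. The paper's actual method additionally attends over the borrowed fragments and relies on suitable indexing for scalability; the sketch only conveys the retrieve-then-augment step.

# Illustrative sketch only: selects topic-relevant fragments from a generic
# source corpus (e.g. Wikipedia) to augment a small topic corpus D_T before
# embedding training. Names and scoring choices are assumptions, not the
# paper's method.

import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def topic_vocabulary(topic_docs, top_n=1000):
    # Rank words of D_T by frequency; a real system might instead weight words
    # by how much their topical cooccurrence profile differs from the generic corpus.
    counts = Counter(w for doc in topic_docs for w in tokenize(doc))
    return dict(counts.most_common(top_n))

def build_inverted_index(source_fragments):
    # Map each word to the ids of the source-corpus fragments containing it.
    index = defaultdict(set)
    for i, frag in enumerate(source_fragments):
        for w in set(tokenize(frag)):
            index[w].add(i)
    return index

def retrieve_fragments(topic_docs, source_fragments, k=100):
    # Score each source fragment by idf-weighted overlap with the topic
    # vocabulary and return the k highest-scoring fragments.
    vocab = topic_vocabulary(topic_docs)
    index = build_inverted_index(source_fragments)
    n = len(source_fragments)
    scores = defaultdict(float)
    for w in vocab:
        postings = index.get(w, ())
        if not postings:
            continue
        idf = math.log(n / len(postings))
        for i in postings:
            scores[i] += idf
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [source_fragments[i] for i in top]

if __name__ == "__main__":
    D_T = ["the bar exam tests knowledge of contract law",
           "appellate courts review questions of law"]
    source = ["the court held that the contract was void",    # legal sense
              "the band played at the bar until midnight",    # non-legal sense
              "statutory law governs the formation of contracts"]
    augmented = D_T + retrieve_fragments(D_T, source, k=2)
    # Embeddings would then be trained on `augmented` (e.g. with word2vec or
    # GloVe), so the topic's sense of ambiguous words like "bar" dominates.
    print(augmented)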
Anthology ID: P19-1168
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Editors: Anna Korhonen, David Traum, Lluís Màrquez
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 1717–1726
URL: https://aclanthology.org/P19-1168
DOI: 10.18653/v1/P19-1168
Cite (ACL): Vihari Piratla, Sunita Sarawagi, and Soumen Chakrabarti. 2019. Topic Sensitive Attention on Generic Corpora Corrects Sense Bias in Pretrained Embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1717–1726, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Topic Sensitive Attention on Generic Corpora Corrects Sense Bias in Pretrained Embeddings (Piratla et al., ACL 2019)
PDF: https://aclanthology.org/P19-1168.pdf
Supplementary: P19-1168.Supplementary.pdf
Video: https://aclanthology.org/P19-1168.mp4
Code: vihari/focussed_embs