Cross-lingual Distillation for Text Classification

Ruochen Xu, Yiming Yang


Abstract
Cross-lingual text classification (CLTC) is the task of classifying documents written in different languages into the same taxonomy of categories. This paper presents a novel approach to CLTC that builds on model distillation, adapting and extending a framework originally proposed for model compression. Using the soft probabilistic predictions for documents in a label-rich language as (induced) supervisory labels over a parallel corpus, we successfully train classifiers for new languages in which labeled training data are not available. An adversarial feature adaptation technique is also applied during model training to reduce distribution mismatch between languages. We conducted experiments on two benchmark CLTC datasets, treating English as the source language and German, French, Japanese and Chinese as the unlabeled target languages. The proposed approach achieved advantageous or comparable performance relative to other state-of-the-art methods.
Anthology ID:
P17-1130
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Editors:
Regina Barzilay, Min-Yen Kan
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1415–1425
URL:
https://aclanthology.org/P17-1130
DOI:
10.18653/v1/P17-1130
Cite (ACL):
Ruochen Xu and Yiming Yang. 2017. Cross-lingual Distillation for Text Classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415–1425, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Cross-lingual Distillation for Text Classification (Xu & Yang, ACL 2017)
PDF:
https://aclanthology.org/P17-1130.pdf
Code:
xrc10/cross-distill
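
The distillation objective described in the abstract, training a target-language student classifier to match the temperature-softened predictions of a source-language teacher on parallel documents, can be sketched as below. This is a minimal illustration assuming PyTorch; the function names, temperature value, and training-loop details are assumptions for exposition, not the authors' released implementation (see the repository above).

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soft cross-entropy between temperature-softened distributions.
    # T > 1 smooths the teacher's output so the student also learns
    # the relative probabilities the teacher assigns to non-top classes.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_probs = F.log_softmax(student_logits / T, dim=-1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    return -(soft_targets * log_probs).sum(dim=-1).mean() * T**2

def train_step(teacher, student, optimizer, src_batch, tgt_batch, T=2.0):
    # src_batch: source-language (e.g. English) documents seen by the trained teacher;
    # tgt_batch: their parallel target-language counterparts, unlabeled.
    with torch.no_grad():
        teacher_logits = teacher(src_batch)  # frozen source-language classifier
    student_logits = student(tgt_batch)      # target-language classifier being trained
    loss = distillation_loss(student_logits, teacher_logits, T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The adversarial feature adaptation mentioned in the abstract would contribute an additional loss term that aligns the student's feature distributions across languages; it is omitted from this sketch for brevity.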