A Multi-lingual Multi-task Architecture for Low-resource Sequence Labeling

Ying Lin, Shengqi Yang, Veselin Stoyanov, Heng Ji


Abstract
We propose a multi-lingual multi-task architecture to develop supervised models with a minimal amount of labeled data for sequence labeling. In this new architecture, we combine various transfer models using two layers of parameter sharing. On the first layer, we construct the basis of the architecture to provide universal word representation and feature extraction capability for all models. On the second layer, we adopt different parameter sharing strategies for different transfer schemes. This architecture proves to be particularly effective in low-resource settings, when there are fewer than 200 training sentences for the target task. Using Name Tagging as a target task, our approach achieved 4.3%–50.5% absolute F-score gains compared to the mono-lingual single-task baseline model.
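The two-level sharing scheme described in the abstract can be sketched in plain Python. This is an illustrative sketch only: the function and component names (build_model, char_embeddings, bilstm, crf) are assumptions for exposition and are not taken from the paper or the released code. The idea shown is that every model reuses the universal components (first layer), while word embeddings are shared per language and output layers per task (second layer).

```python
def build_model(language, task, shared, by_language, by_task):
    """Assemble a sequence-labeling model from shared parameter groups.

    Illustrative sketch: components are stand-in strings rather than
    real neural parameters.
    """
    return {
        # First layer: universal components, shared by every model.
        "char_embeddings": shared["char_embeddings"],
        "feature_extractor": shared["bilstm"],
        # Second layer: shared within one language (across tasks) ...
        "word_embeddings": by_language.setdefault(language, f"emb[{language}]"),
        # ... or within one task (across languages).
        "output_layer": by_task.setdefault(task, f"crf[{task}]"),
    }

shared = {"char_embeddings": "char_emb", "bilstm": "bilstm"}
by_language, by_task = {}, {}

m1 = build_model("english", "name_tagging", shared, by_language, by_task)
m2 = build_model("spanish", "name_tagging", shared, by_language, by_task)

# Both models reuse the universal extractor and the task output layer,
# but keep language-specific word embeddings.
assert m1["feature_extractor"] is m2["feature_extractor"]
assert m1["output_layer"] == m2["output_layer"]
assert m1["word_embeddings"] != m2["word_embeddings"]
```

In a real implementation each entry would be a trainable module rather than a string, but the lookup structure — global, per-language, and per-task parameter groups — is what lets a low-resource target model borrow most of its parameters from higher-resource models.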
Anthology ID:
P18-1074
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
799–809
URL:
https://aclanthology.org/P18-1074
DOI:
10.18653/v1/P18-1074
Cite (ACL):
Ying Lin, Shengqi Yang, Veselin Stoyanov, and Heng Ji. 2018. A Multi-lingual Multi-task Architecture for Low-resource Sequence Labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 799–809, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
A Multi-lingual Multi-task Architecture for Low-resource Sequence Labeling (Lin et al., ACL 2018)
PDF:
https://aclanthology.org/P18-1074.pdf
Presentation:
P18-1074.Presentation.pdf
Video:
https://aclanthology.org/P18-1074.mp4
Code:
limteng-rpi/mlmt