Reinforced Co-Training

Jiawei Wu, Lei Li, William Yang Wang


Abstract
Co-training is a popular semi-supervised learning framework to utilize a large amount of unlabeled data in addition to a small labeled set. Co-training methods exploit predicted labels on the unlabeled data and select samples based on prediction confidence to augment the training. However, the selection of samples in existing co-training methods is based on a predetermined policy, which ignores the sampling bias between the unlabeled and the labeled subsets, and fails to explore the data space. In this paper, we propose a novel method, Reinforced Co-Training, to select high-quality unlabeled samples to better co-train on. More specifically, our approach uses Q-learning to learn a data selection policy with a small labeled dataset, and then exploits this policy to train the co-training classifiers automatically. Experimental results on clickbait detection and generic text classification tasks demonstrate that our proposed method can obtain more accurate text classification results.
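
Since the abstract only sketches the approach, the following is a minimal, hypothetical toy sketch of the core loop, not the paper's implementation: a tabular Q-learning agent chooses which subset of the unlabeled pool to pseudo-label next, two per-view classifiers are retrained on the augmented set, and the change in held-out accuracy serves as the reward. Everything concrete below (the feature-split views, scikit-learn logistic regressions, the last-action state, the subset count, all hyperparameters) is an illustrative assumption; the paper's own state representation, reward, and Q-learner design will differ.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

rng = np.random.default_rng(0)

# Toy two-view data: disjoint feature splits stand in for the two views.
X, y = make_classification(n_samples=3000, n_features=20,
                           n_informative=10, random_state=0)
view1, view2 = X[:, :10], X[:, 10:]
idx = rng.permutation(len(y))
lab, val, unlab = idx[:100], idx[100:300], idx[300:]

K = 10                                # number of unlabeled subsets (actions)
subsets = np.array_split(unlab, K)

def cotrain(train_idx, train_y):
    """Fit one classifier per view; return both plus validation accuracy."""
    c1 = LogisticRegression(max_iter=500).fit(view1[train_idx], train_y)
    c2 = LogisticRegression(max_iter=500).fit(view2[train_idx], train_y)
    acc = ((c1.predict(view1[val]) == y[val]).mean()
           + (c2.predict(view2[val]) == y[val]).mean()) / 2
    return c1, c2, acc

# Tabular Q-learning: state = last chosen subset (K means "none yet"),
# action = next subset to pseudo-label, reward = change in val accuracy.
Q = np.zeros((K + 1, K))
eps, alpha, gamma = 0.2, 0.5, 0.9
train_idx, train_y = lab.copy(), y[lab].copy()
state = K
c1, c2, prev_acc = cotrain(train_idx, train_y)

for step in range(20):
    a = int(rng.integers(K)) if rng.random() < eps else int(Q[state].argmax())
    # Co-training step: both views jointly pseudo-label the chosen subset
    # (averaged probabilities, a simplification of view-exchange labeling;
    # for brevity a subset may be re-selected across steps).
    probs = (c1.predict_proba(view1[subsets[a]])
             + c2.predict_proba(view2[subsets[a]])) / 2
    pseudo = probs.argmax(axis=1)
    train_idx = np.concatenate([train_idx, subsets[a]])
    train_y = np.concatenate([train_y, pseudo])
    c1, c2, acc = cotrain(train_idx, train_y)
    reward = acc - prev_acc          # did adding this subset help?
    Q[state, a] += alpha * (reward + gamma * Q[a].max() - Q[state, a])
    state, prev_acc = a, acc

print(f"final validation accuracy: {prev_acc:.3f}")

The update inside the loop is the standard one-step Q-learning rule, Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)]: subsets whose pseudo-labels raise validation accuracy earn positive reward, so over steps the policy learns to prefer unlabeled regions that actually help both views, the data-driven selection behavior the abstract contrasts with fixed confidence-based heuristics.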
Anthology ID: N18-1113
Volume: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
Month: June
Year: 2018
Address: New Orleans, Louisiana
Editors: Marilyn Walker, Heng Ji, Amanda Stent
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 1252–1262
URL: https://aclanthology.org/N18-1113
DOI: 10.18653/v1/N18-1113
Cite (ACL): Jiawei Wu, Lei Li, and William Yang Wang. 2018. Reinforced Co-Training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1252–1262, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal): Reinforced Co-Training (Wu et al., NAACL 2018)
PDF: https://aclanthology.org/N18-1113.pdf