Predicting proficiency levels in learner writings by transferring a linguistic complexity model from expert-written coursebooks

Ildikó Pilán, Elena Volodina, Torsten Zesch


Abstract
The lack of a sufficient amount of data tailored for a task is a well-recognized problem for many statistical NLP methods. In this paper, we explore whether data sparsity can be successfully tackled when classifying language proficiency levels in the domain of learner-written output texts. We aim at overcoming data sparsity by incorporating knowledge in the trained model from another domain consisting of input texts written by teaching professionals for learners. We compare different domain adaptation techniques and find that a weighted combination of the two types of data performs best, which can even rival systems based on considerably larger amounts of in-domain data. Moreover, we show that normalizing errors in learners’ texts can substantially improve classification when level-annotated in-domain data is not available.
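Below is a minimal, illustrative sketch of the general idea of a weighted combination of out-of-domain and in-domain data, assuming scikit-learn; the data, feature extraction (a bag-of-words baseline rather than the linguistic complexity features used in the paper), and the weight value are all hypothetical and not the authors' implementation.

```python
# Hedged sketch: instance-weighted combination of expert-written coursebook
# texts (out-of-domain) with learner-written texts (in-domain) for
# proficiency-level classification. All names and values are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data with CEFR-style proficiency labels.
coursebook_texts = ["coursebook reading text at level A2 ...",
                    "coursebook reading text at level B1 ..."]
coursebook_labels = ["A2", "B1"]
learner_texts = ["learner essay judged to be A2 ...",
                 "learner essay judged to be B1 ..."]
learner_labels = ["A2", "B1"]

texts = coursebook_texts + learner_texts
labels = coursebook_labels + learner_labels

# Down-weight the out-of-domain coursebook instances relative to the scarce
# in-domain learner instances (0.5 is an arbitrary illustrative weight).
weights = np.array([0.5] * len(coursebook_texts) + [1.0] * len(learner_texts))

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

clf = LogisticRegression(max_iter=1000)
clf.fit(X, labels, sample_weight=weights)

# Predict the proficiency level of a new learner-written essay.
print(clf.predict(vectorizer.transform(["a new learner-written essay ..."])))
```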
Anthology ID: C16-1198
Volume: Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
Month: December
Year: 2016
Address: Osaka, Japan
Editors: Yuji Matsumoto, Rashmi Prasad
Venue: COLING
Publisher: The COLING 2016 Organizing Committee
Pages: 2101–2111
URL: https://aclanthology.org/C16-1198
Cite (ACL):
Ildikó Pilán, Elena Volodina, and Torsten Zesch. 2016. Predicting proficiency levels in learner writings by transferring a linguistic complexity model from expert-written coursebooks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2101–2111, Osaka, Japan. The COLING 2016 Organizing Committee.
Cite (Informal):
Predicting proficiency levels in learner writings by transferring a linguistic complexity model from expert-written coursebooks (Pilán et al., COLING 2016)
PDF: https://aclanthology.org/C16-1198.pdf