POS Tagging (State of the art)
Test collections
- Performance measure: per-token accuracy. (The convention is for this to be measured on all tokens, including punctuation tokens and other unambiguous tokens; a short code sketch of this computation follows the list below.)
- English
- Penn Treebank Wall Street Journal (WSJ) release 3 (LDC99T42). The splits of data for this task were not standardized early on (unlike for parsing), and early work uses various data splits defined by counts of tokens or by sections. Most work from 2002 on adopts the following data splits, introduced by Collins (2002) and sketched in code after this list:
- Training data: sections 0-18
- Development test data: sections 19-21
- Testing data: sections 22-24
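
The per-token accuracy convention above is simply the fraction of tokens whose predicted tag matches the gold tag, with nothing excluded from the denominator. A minimal Python sketch (the function name and toy sentences are illustrative, not taken from any of the systems below):

```python
def per_token_accuracy(gold_sentences, predicted_sentences):
    """gold_sentences / predicted_sentences: lists of sentences,
    each a list of (word, tag) pairs in the same order."""
    correct = total = 0
    for gold, pred in zip(gold_sentences, predicted_sentences):
        for (_, gold_tag), (_, pred_tag) in zip(gold, pred):
            total += 1          # every token counts, punctuation included
            if gold_tag == pred_tag:
                correct += 1
    return correct / total

# Toy example: 5 of 6 tokens tagged correctly -> 83.33%
gold = [[("The", "DT"), ("cat", "NN"), ("sat", "VBD"), (".", ".")],
        [("Dogs", "NNS"), ("bark", "VBP")]]
pred = [[("The", "DT"), ("cat", "NN"), ("sat", "VBN"), (".", ".")],
        [("Dogs", "NNS"), ("bark", "VBP")]]
print(f"{per_token_accuracy(gold, pred):.2%}")  # 83.33%
```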
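
As a concrete illustration of the Collins (2002) split, here is a sketch that partitions the bracketed WSJ files by section number. It assumes a local copy of the LDC99T42 release laid out as wsj/00 through wsj/24 (the path below is hypothetical) and uses NLTK's BracketParseCorpusReader as one possible way to read the .mrg files; none of this is prescribed by the task itself.

```python
from nltk.corpus.reader import BracketParseCorpusReader

# Assumption: the LDC99T42 bracketed files live under this local path,
# organized as <root>/<section>/wsj_XXXX.mrg (sections 00 through 24).
WSJ_ROOT = "/path/to/treebank_3/parsed/mrg/wsj"  # hypothetical path
wsj = BracketParseCorpusReader(WSJ_ROOT, r".*/wsj_.*\.mrg")

def section(fileid):
    # fileids look like "02/wsj_0201.mrg"; the directory is the section.
    return int(fileid.split("/")[0])

train_ids = [f for f in wsj.fileids() if 0 <= section(f) <= 18]
dev_ids   = [f for f in wsj.fileids() if 19 <= section(f) <= 21]
test_ids  = [f for f in wsj.fileids() if 22 <= section(f) <= 24]

train_sents = wsj.tagged_sents(train_ids)  # sentences as [(word, tag), ...]
dev_sents   = wsj.tagged_sents(dev_ids)
test_sents  = wsj.tagged_sents(test_ids)
```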
Tables of results
WSJ
System name | Short description | Main publication | Software | Extra Data?*** | All tokens | Unknown words |
---|---|---|---|---|---|---|
TnT* | Hidden Markov model | Brants (2000) | TnT | No | 96.46% | 85.86% |
GENiA Tagger** | Maximum entropy cyclic dependency network | Tsuruoka et al. (2005) | GENiA | No | 97.05% | Not available |
Averaged Perceptron | Averaged Perceptron discriminative sequence model | Collins (2002) | Not available | No | 97.11% | Not available |
Maxent easiest-first | Maximum entropy bidirectional easiest-first inference | Tsuruoka and Tsujii (2005) | Easiest-first | No | 97.15% | Not available |
SVMTool | SVM-based tagger and tagger generator | Giménez and Márquez (2004) | SVMTool | No | 97.16% | 89.01% |
Morče/COMPOST | Averaged Perceptron | Spoustová et al. (2009) | [1] | No | 97.23% | Not available |
Stanford Tagger 1.0 | Maximum entropy cyclic dependency network | Toutanova et al. (2003) | Stanford Tagger | No | 97.24% | 89.04% |
Stanford Tagger 2.0 | Maximum entropy cyclic dependency network | Manning (2011) | Stanford Tagger | No | 97.29% | 89.70% |
Stanford Tagger 2.0 | Maximum entropy cyclic dependency network | Manning (2011) | Stanford Tagger | Yes | 97.32% | 90.79% |
LTAG-spinal | Bidirectional perceptron learning | Shen et al. (2007) | LTAG-spinal | No | 97.33% | Not available |
Morče/COMPOST | Averaged Perceptron | Spoustová et al. (2009) | [2] | Yes | 97.44% | Not available |
SCCN | Semi-supervised condensed nearest neighbor | Søgaard (2011) | SCCN | Yes | 97.50% | Not available |
(*) TnT: Accuracy is as reported by Giménez and Márquez (2004) for the given test collection. Brants (2000) reports 96.7% token accuracy and 85.5% unknown word accuracy on a 10-fold cross-validation of the Penn WSJ corpus.
(**) GENiA: Results are for models trained and tested on the given corpora (to be comparable to other results). The distributed GENiA tagger is trained on a mixed training corpus and gets 96.94% on WSJ, and 98.26% on GENiA biomedical English.
(***) Extra data: Whether system training exploited (usually large amounts of) extra unlabeled text, such as by semi-supervised learning, self-training, or using distributional similarity features, beyond the standard supervised training data.
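
The "Unknown words" column in the table above is conventionally the same per-token accuracy restricted to test tokens whose word form never appears in the training sections. A small sketch of that restriction, reusing tagged-sentence lists of (word, tag) pairs as in the sketches above (names are illustrative):

```python
def unknown_word_accuracy(train_sents, gold_sents, pred_sents):
    # Vocabulary = every word form seen in the training sections.
    vocab = {word for sent in train_sents for (word, _) in sent}
    correct = total = 0
    for gold, pred in zip(gold_sents, pred_sents):
        for (word, gold_tag), (_, pred_tag) in zip(gold, pred):
            if word in vocab:        # score only tokens unseen in training
                continue
            total += 1
            correct += gold_tag == pred_tag
    return correct / total if total else 0.0
```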
References
- Brants, Thorsten. 2000. TnT -- A Statistical Part-of-Speech Tagger. Proceedings of the 6th Applied Natural Language Processing Conference (ANLP 2000).
- Collins, Michael. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. EMNLP 2002.
- Giménez, J., and Márquez, L. 2004. SVMTool: A general POS tagger generator based on Support Vector Machines. Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC'04). Lisbon, Portugal.
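- Manning, Christopher D. 2011. Part-of-Speech Tagging from 97% to 100%: Is It Time for Some Linguistics? Proceedings of CICLing 2011, Springer LNCS.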
- Shen, L., Satta, G., and Joshi, A. 2007. Guided learning for bidirectional sequence classification. Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL 2007), pages 760-767.
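- Spoustová, Drahomíra "johanka", Jan Hajič, Jan Raab, and Miroslav Spousta. 2009. Semi-supervised Training for the Averaged Perceptron POS Tagger. Proceedings of EACL 2009.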
- Søgaard, Anders. 2011. Semi-supervised condensed nearest neighbor for part-of-speech tagging. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), Portland, Oregon.
- Toutanova, K., Klein, D., Manning, C. D., and Singer, Y. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. Proceedings of HLT-NAACL 2003, pages 252-259.
- Tsuruoka, Yoshimasa, Yuka Tateishi, Jin-Dong Kim, Tomoko Ohta, John McNaught, Sophia Ananiadou, and Jun'ichi Tsujii. 2005. Developing a Robust Part-of-Speech Tagger for Biomedical Text. Advances in Informatics: 10th Panhellenic Conference on Informatics, LNCS 3746, pages 382-392.
- Tsuruoka, Yoshimasa, and Jun'ichi Tsujii. 2005. Bidirectional Inference with the Easiest-First Strategy for Tagging Sequence Data. Proceedings of HLT/EMNLP 2005, pages 467-474.