POS Tagging (State of the art)

From ACL Wiki
Revision as of 12:37, 2 January 2010 by ChristopherManning (Adding unknown words accuracies, where available.)


Test collections

  • Performance measure: per-token accuracy. (By convention, this is measured over all tokens, including punctuation tokens and other unambiguous tokens.)
  • English
    • Penn Treebank Wall Street Journal (WSJ). Data splits for this corpus were not standardized early on (unlike for parsing), and early work uses various splits defined by token counts or by sections. Most work from 2002 onward adopts the following splits, introduced by Collins (2002):
      • Training data: sections 0-18
      • Development test data: sections 19-21
      • Testing data: sections 22-24
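The evaluation convention and section splits above can be sketched as follows. This is an illustrative example, not code from any of the taggers listed; the helper names and the toy tag sequences are invented for demonstration.

```python
# Sketch of the conventions described above: per-token accuracy over
# ALL tokens (punctuation and other unambiguous tokens included), and
# the Collins (2002) WSJ section splits. The tag sequences below are
# illustrative, not real WSJ data.

def wsj_split(section: int) -> str:
    """Map a WSJ section number to the standard train/dev/test split."""
    if 0 <= section <= 18:
        return "train"
    if 19 <= section <= 21:
        return "dev"
    if 22 <= section <= 24:
        return "test"
    raise ValueError(f"WSJ has no section {section}")

def per_token_accuracy(gold, predicted):
    """Fraction of tokens tagged correctly; every token counts."""
    if len(gold) != len(predicted):
        raise ValueError("tag sequences must align token-for-token")
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

gold = ["DT", "NN", "VBZ", "JJ", "."]  # the "." punctuation token counts too
pred = ["DT", "NN", "VBZ", "NN", "."]
print(wsj_split(22))                             # → test
print(f"{per_token_accuracy(gold, pred):.2%}")   # → 80.00%
```

Note that scoring all tokens, rather than only ambiguous ones, inflates accuracy somewhat, since many tokens (punctuation, most closed-class words) are trivially correct; this is why unknown-word accuracy is reported separately in the table below.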

Tables of results


| System name | Short description | Main publications | Software | All tokens | Unknown words |
|---|---|---|---|---|---|
| Averaged Perceptron | Averaged perceptron discriminative sequence model | Collins (2002) | Not available | 97.11% | Not available |
| SVMTool | SVM-based tagger and tagger generator | Giménez and Márquez (2004) | SVMTool | 97.16% | 89.01% |
| Stanford Tagger 1.0 | Maximum-entropy cyclic dependency network | Toutanova et al. (2003) | Stanford Tagger | 97.24% | 89.04% |
| LTAG-spinal | Bidirectional perceptron learning | Shen et al. (2007) | LTAG-spinal | 97.33% | Not available |
| GENiA Tagger | ? | Tsuruoka et al. (2005) | GENiA | 96.94% on WSJ, 98.26% on biomedical text | Not available |


See also