POS Tagging (State of the art)

From ACL Wiki

Revision as of 17:23, 2 January 2010

== Test collections ==

* Performance measure: per-token accuracy. (The convention is to measure this over all tokens, including punctuation tokens and other unambiguous tokens.)
* English
** Penn Treebank Wall Street Journal (WSJ). The data splits for this corpus were not standardized early on (unlike for parsing), and early work uses various splits defined by token counts or by section numbers. Most work from 2002 onward adopts the following splits, introduced by Collins (2002):
*** Training data: sections 0-18
*** Development test data: sections 19-21
*** Testing data: sections 22-24
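To make the performance measure concrete, here is a minimal sketch (the function and variable names are hypothetical, not taken from any tagger on this page) that computes both all-token accuracy and unknown-word accuracy, where an unknown word is a word type never seen in the training data:

```python
# Illustrative sketch of the two accuracy measures used in the table below.
# All names are hypothetical; gold and predicted are parallel lists of
# (token, tag) pairs, and train_vocab is the set of training word types.

def tagging_accuracy(gold, predicted, train_vocab):
    """Return (all-token accuracy, unknown-word accuracy)."""
    assert len(gold) == len(predicted)
    correct = unk_correct = unk_total = 0
    for (word, gold_tag), (_, pred_tag) in zip(gold, predicted):
        hit = gold_tag == pred_tag
        correct += hit
        if word not in train_vocab:  # unknown word: unseen in training data
            unk_total += 1
            unk_correct += hit
    all_acc = correct / len(gold)
    unk_acc = unk_correct / unk_total if unk_total else float("nan")
    return all_acc, unk_acc
```

For example, if one of three tokens is mistagged and that token happens to be the only unknown word, all-token accuracy is 2/3 while unknown-word accuracy is 0, which is why the two columns in the table below can diverge sharply.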


== Tables of results ==

=== WSJ ===

{| class="wikitable"
! System name !! Short description !! Main publications !! Software !! All tokens !! Unknown words
|-
| TnT* || Hidden Markov model || Brants (2000) || [http://www.coli.uni-saarland.de/~thorsten/tnt/ TnT] || 96.46% || 85.86%
|-
| GENiA Tagger** || Maximum entropy cyclic dependency network || Tsuruoka et al. (2005) || [http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/tagger/ GENiA] || 97.05% || Not available
|-
| Averaged Perceptron || Averaged perceptron discriminative sequence model || Collins (2002) || Not available || 97.11% || Not available
|-
| Maxent easiest-first || Maximum entropy bidirectional easiest-first inference || Tsuruoka and Tsujii (2005) || [http://www-tsujii.is.s.u-tokyo.ac.jp/~tsuruoka/postagger/ Easiest-first] || 97.15% || Not available
|-
| SVMTool || SVM-based tagger and tagger generator || Giménez and Márquez (2004) || SVMTool || 97.16% || 89.01%
|-
| Stanford Tagger 1.0 || Maximum entropy cyclic dependency network || Toutanova et al. (2003) || [http://nlp.stanford.edu/software/tagger.shtml Stanford Tagger] || 97.24% || 89.04%
|-
| Stanford Tagger 2.0 || Maximum entropy cyclic dependency network || [http://nlp.stanford.edu/software/tagger.shtml Stanford Tagger] || [http://nlp.stanford.edu/software/tagger.shtml Stanford Tagger] || 97.32% || 90.79%
|-
| LTAG-spinal || Bidirectional perceptron learning || Shen et al. (2007) || [http://www.cis.upenn.edu/~xtag/spinal/ LTAG-spinal] || 97.33% || Not available
|}

(*) TnT: Accuracy is as reported by Giménez and Márquez (2004) for the given test collection. Brants (2000) reports 96.7% token accuracy and 85.5% unknown word accuracy on a 10-fold cross-validation of the Penn WSJ corpus.

(**) GENiA: Results are for models trained and tested on the given corpora (to be comparable to other results). The distributed GENiA tagger is trained on a mixed training corpus and gets 96.94% on WSJ, and 98.26% on GENiA biomedical English.
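The gap between the "All tokens" and "Unknown words" columns reflects the fact that a tagger must guess tags for unseen words from surface cues alone. A hedged sketch of a trivial most-frequent-tag baseline (all names hypothetical; its accuracy would fall far below every system in the table) makes the mechanism concrete:

```python
# Hypothetical most-frequent-tag baseline, for illustration only.
# Known words receive their most frequent training tag; unknown words
# fall back to crude guesses from capitalization and suffixes, which is
# why unknown-word accuracy lags all-token accuracy for every system.

from collections import Counter, defaultdict

def train_baseline(tagged_corpus):
    """tagged_corpus: iterable of (word, tag) pairs.
    Returns a lexicon mapping each word to its most frequent tag."""
    counts = defaultdict(Counter)
    for word, tag in tagged_corpus:
        counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag_word(lexicon, word):
    if word in lexicon:
        return lexicon[word]
    # Unknown word: guess from surface form (Penn Treebank tag names).
    if word[:1].isupper():
        return "NNP"  # capitalized unknowns are often proper nouns
    if word.endswith("ing"):
        return "VBG"
    if word.endswith("s"):
        return "NNS"
    return "NN"  # default guess
```

Systems such as TnT refine this idea with probabilistic suffix models, which is how it reaches 85.86% on unknown words rather than chance-level performance.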

== References ==

* Brants, Thorsten. 2000. [http://acl.ldc.upenn.edu/A/A00/A00-1031.pdf TnT -- A Statistical Part-of-Speech Tagger]. ''6th Applied Natural Language Processing Conference''.
* Collins, Michael. 2002. [http://people.csail.mit.edu/mcollins/papers/tagperc.pdf Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms]. ''EMNLP 2002''.

== See also ==