POS Tagging (State of the art)

From ACL Wiki (revision as of 06:12, 2 July 2012)

== Test collections ==

* '''Performance measure:''' per token accuracy. (The convention is for this to be measured on all tokens, including punctuation tokens and other unambiguous tokens.)

* '''English'''
** '''Penn Treebank Wall Street Journal (WSJ)''' release 3 (LDC99T42). The splits of data for this task were not standardized early on (unlike for parsing), and early work uses various data splits defined by counts of tokens or by sections. Most work from 2002 on adopts the following data splits, introduced by Collins (2002):
*** '''Training data:''' sections 0-18
*** '''Development test data:''' sections 19-21
*** '''Testing data:''' sections 22-24

* '''French'''
** '''French TreeBank''' (FTB, Abeillé et al., 2003), ''Le Monde'', December 2007 version, 28-tag tagset (CC tagset, Crabbé and Candito, 2008). Classical data split (10-10-80):
*** '''Training data:''' sentences 2471 to 12351
*** '''Development test data:''' sentences 1236 to 2470
*** '''Testing data:''' sentences 1 to 1235
 
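Per-token accuracy (the performance measure used for all results on this page), together with the separate unknown-word accuracy reported in some table columns, can be sketched as follows. This is a minimal illustration; the sentence, tag, and vocabulary data are invented placeholders, not the actual WSJ or FTB corpora:

```python
def tagging_accuracy(gold, predicted, training_vocab):
    """Per-token tagging accuracy over all tokens (punctuation included),
    plus accuracy restricted to 'unknown' words: tokens whose word form
    was never seen in the training data."""
    assert len(gold) == len(predicted)
    correct = correct_unk = total_unk = 0
    for (word, gold_tag), (_, pred_tag) in zip(gold, predicted):
        if gold_tag == pred_tag:
            correct += 1
        if word not in training_vocab:
            total_unk += 1
            if gold_tag == pred_tag:
                correct_unk += 1
    all_acc = correct / len(gold)
    unk_acc = correct_unk / total_unk if total_unk else float("nan")
    return all_acc, unk_acc

# Toy example (placeholder data):
gold = [("The", "DT"), ("flurble", "NN"), ("sleeps", "VBZ"), (".", ".")]
pred = [("The", "DT"), ("flurble", "JJ"), ("sleeps", "VBZ"), (".", ".")]
vocab = {"The", "sleeps", "."}
all_acc, unk_acc = tagging_accuracy(gold, pred, vocab)
# all_acc == 0.75 (3 of 4 tokens correct, punctuation counted)
# unk_acc == 0.0  ("flurble" is the only unknown word, and it is mistagged)
```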
== Tables of results ==

===WSJ===
 
{| border="1" cellpadding="5" cellspacing="1" width="100%"
|-
! System name
! Short description
! Main publication
! Software
! Extra Data?***
! All tokens
! Unknown words
! License
|-
| TnT*
| Hidden Markov model
| Brants (2000)
| TnT
| No
| 96.46%
| 85.86%
| Unknown
|-
| MElt
| MEMM with external lexical information
| Denis and Sagot (2009)
| Alpage linguistic workbench
| No
| 96.96%
| 91.29%
| Unknown
|-
| GENiA Tagger**
| Maximum entropy cyclic dependency network
| Tsuruoka et al. (2005)
| GENiA
| No
| 97.05%
| Not available
| Gratis for non-commercial usage
|-
| Averaged Perceptron
| Averaged perceptron discriminative sequence model
| Collins (2002)
| Not available
| No
| 97.11%
| Not available
| Unknown
|-
| Maxent easiest-first
| Maximum entropy bidirectional easiest-first inference
| Tsuruoka and Tsujii (2005)
| Easiest-first
| No
| 97.15%
| Not available
| Unknown
|-
| SVMTool
| SVM-based tagger and tagger generator
| Giménez and Márquez (2004)
| SVMTool
| No
| 97.16%
| 89.01%
| Unknown
|-
| Morče/COMPOST
| Averaged perceptron
| Spoustová et al. (2009)
| [1]
| No
| 97.23%
| Not available
| Unknown
|-
| Stanford Tagger 1.0
| Maximum entropy cyclic dependency network
| Toutanova et al. (2003)
| Stanford Tagger
| No
| 97.24%
| 89.04%
| GPL v2+
|-
| Stanford Tagger 2.0
| Maximum entropy cyclic dependency network
| Manning (2011)
| Stanford Tagger
| No
| 97.29%
| 89.70%
| GPL v2+
|-
| Stanford Tagger 2.0
| Maximum entropy cyclic dependency network
| Manning (2011)
| Stanford Tagger
| Yes
| 97.32%
| 90.79%
| GPL v2+
|-
| LTAG-spinal
| Bidirectional perceptron learning
| Shen et al. (2007)
| LTAG-spinal
| No
| 97.33%
| Not available
| Unknown
|-
| Morče/COMPOST
| Averaged perceptron
| Spoustová et al. (2009)
| [2]
| Yes
| 97.44%
| Not available
| Unknown
|-
| SCCN
| Semi-supervised condensed nearest neighbor
| Søgaard (2011)
| SCCN
| Yes
| 97.50%
| Not available
| Unknown
|}

(*) TnT: Accuracy is as reported by Giménez and Márquez (2004) for the given test collection. Brants (2000) reports 96.7% token accuracy and 85.5% unknown word accuracy on a 10-fold cross-validation of the Penn WSJ corpus.

(**) GENiA: Results are for models trained and tested on the given corpora (to be comparable to other results). The distributed GENiA tagger is trained on a mixed training corpus and gets 96.94% on WSJ and 98.26% on GENiA biomedical English.

(***) Extra data: Whether system training exploited (usually large amounts of) extra unlabeled text, such as by semi-supervised learning, self-training, or using distributional similarity features, beyond the standard supervised training data.
===FTB===

{| border="1" cellpadding="5" cellspacing="1" width="100%"
|-
! System name
! Short description
! Main publication
! Software
! Extra Data?***
! All tokens
! Unknown words
|-
| Morfette
| Perceptron with external lexical information*
| Chrupała et al. (2008), Seddah et al. (2010)
| [http://sites.google.com/site/morfetteweb/ Morfette]
| No
| 97.68%
| 90.52%
|-
| SEM
| CRF with external lexical information*
| Constant et al. (2011)
| [http://www.univ-orleans.fr/lifo/Members/Isabelle.Tellier/SEM.html SEM]
| No
| 97.7%
| Not available
|-
| MElt
| MEMM with external lexical information*
| Denis and Sagot (2009)
| [https://gforge.inria.fr/projects/lingwb/ Alpage linguistic workbench]
| No
| 97.80%
| 91.77%
|}

(*) External lexical information from the Lefff lexicon (Sagot 2010, [https://gforge.inria.fr/frs/?group_id=482 Alexina project])
  
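The classical 10-10-80 FTB split (test = sentences 1 to 1235, development = 1236 to 2470, training = 2471 to 12351) amounts to simple slicing of the 1-indexed sentence list. A minimal sketch, where `sentences` is a placeholder list standing in for the December 2007 FTB sentences:

```python
def ftb_split(sentences):
    """Classical FTB 10-10-80 split by 1-indexed sentence number:
    test = 1-1235, dev = 1236-2470, train = 2471-12351."""
    test = sentences[0:1235]       # sentences 1 to 1235
    dev = sentences[1235:2470]     # sentences 1236 to 2470
    train = sentences[2470:12351]  # sentences 2471 to 12351
    return train, dev, test

# Placeholder corpus of 12351 "sentences":
sentences = [f"sent_{i}" for i in range(1, 12352)]
train, dev, test = ftb_split(sentences)
# len(test) == 1235, len(dev) == 1235, len(train) == 9881
```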
 
== References ==

* Brants, Thorsten. 2000. [http://acl.ldc.upenn.edu/A/A00/A00-1031.pdf TnT -- A Statistical Part-of-Speech Tagger]. ''6th Applied Natural Language Processing Conference''.

* Chrupała, Grzegorz, Dinu, Georgiana and van Genabith, Josef. 2008. [http://www.lrec-conf.org/proceedings/lrec2008/pdf/594_paper.pdf Learning Morphology with Morfette]. ''LREC 2008''.

* Collins, Michael. 2002. [http://people.csail.mit.edu/mcollins/papers/tagperc.pdf Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms]. ''EMNLP 2002''.

* Constant, Matthieu, Tellier, Isabelle, Duchier, Denys, Dupont, Yoann, Sigogne, Anthony, and Billot, Sylvie. 2011. [http://www.lirmm.fr/~lopez/TALN2011/Longs-TALN+RECITAL/Tellier_taln11_submission_54.pdf Intégrer des connaissances linguistiques dans un CRF : application à l'apprentissage d'un segmenteur-étiqueteur du français]. ''TALN'11''.

* Denis, Pascal and Sagot, Benoît. 2009. [http://alpage.inria.fr/~sagot/pub/paclic09tagging.pdf Coupling an annotated corpus and a morphosyntactic lexicon for state-of-the-art POS tagging with less human effort]. ''PACLIC 2009''.

* Manning, Christopher D. 2011. Part-of-Speech Tagging from 97% to 100%: Is It Time for Some Linguistics? In Alexander Gelbukh (ed.), ''Computational Linguistics and Intelligent Text Processing, 12th International Conference, CICLing 2011, Proceedings, Part I''. Lecture Notes in Computer Science 6608, pp. 171-189. Springer.

* Seddah, Djamé, Chrupała, Grzegorz, Çetinoglu, Özlem and Candito, Marie. 2010. [http://aclweb.org/anthology-new/W/W10/W10-1410.pdf Lemmatization and Lexicalized Statistical Parsing of Morphologically Rich Languages: the Case of French]. ''SPMRL 2010 (NAACL 2010 workshop)''.

* Shen, L., Satta, G., and Joshi, A. 2007. [http://acl.ldc.upenn.edu/P/P07/P07-1096.pdf Guided learning for bidirectional sequence classification]. ''Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL 2007)'', pages 760-767.

* Søgaard, Anders. 2011. Semi-supervised condensed nearest neighbor for part-of-speech tagging. ''The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT)''. Portland, Oregon.

* Spoustová, Drahomíra "Johanka", Jan Hajič, Jan Raab and Miroslav Spousta. 2009. Semi-supervised Training for the Averaged Perceptron POS Tagger. ''Proceedings of the 12th EACL'', pages 763-771.

== See also ==