NP Chunking (State of the art)

From ACLWiki
== "Standard" measure: ==
The performance of the algorithm is measured with two scores: precision and recall. Precision measures the percentage of NPs found by the algorithm that are correct, and recall measures the percentage of NPs defined in the corpus that were found by the chunking program.
The two rates can be combined into a single measure, the F score: F = 2 * precision * recall / (recall + precision)
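As a concrete illustration, the three scores can be computed from sets of predicted and gold-standard NP spans. The (start, end) span representation, the function name, and the example numbers below are invented for illustration:

```python
def chunk_scores(predicted, gold):
    """Precision, recall and F score for NP chunking.

    `predicted` and `gold` are sets of NP spans; here a span is a
    hypothetical (start, end) pair of token indices.
    """
    correct = len(predicted & gold)  # NPs found that also occur in the corpus
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    # F = 2 * precision * recall / (recall + precision)
    return precision, recall, 2 * precision * recall / (recall + precision)

# Three predicted NPs, all correct; the gold standard has a fourth NP.
p, r, f = chunk_scores({(0, 1), (3, 4), (6, 8)},
                       {(0, 1), (3, 4), (6, 8), (10, 11)})
# p = 1.0, r = 0.75, f ≈ 0.857
```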
  
 
== "Standard" datasets: ==
The original data of the NP chunking experiments by Lance Ramshaw and Mitch Marcus contains one word per line. Each line contains six fields, of which only the first three are relevant: the word, the part-of-speech tag assigned by the Brill tagger, and the correct IOB tag.
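In this IOB scheme, I marks a word inside an NP, O a word outside any NP, and B the first word of an NP that immediately follows another NP. A minimal sketch of grouping such lines into NP chunks (the example sentence, its tags, and the function name are invented for illustration):

```python
def iob_to_chunks(lines):
    """Group words into NP chunks from lines of the form
    'word POS-tag IOB-tag' (only the first three fields are used)."""
    chunks, current = [], []
    for line in lines:
        word, pos, tag = line.split()[:3]
        if tag == "O":                 # outside any NP: close the open chunk
            if current:
                chunks.append(current)
            current = []
        elif tag == "B":               # new NP directly after another NP
            if current:
                chunks.append(current)
            current = [word]
        else:                          # "I": inside the current NP
            current.append(word)
    if current:
        chunks.append(current)
    return chunks

# Invented example: "yesterday" is tagged B because it starts an NP
# immediately after the NP "the mat".
sample = [
    "The DT I", "cat NN I", "sat VBD O", "on IN O",
    "the DT I", "mat NN I", "yesterday NN B",
]
chunks = iob_to_chunks(sample)
# chunks == [["The", "cat"], ["the", "mat"], ["yesterday"]]
```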
The standard data set put forward by Ramshaw and Marcus consists of sections 15-18 of the Wall Street Journal corpus as training material and section 20 of that corpus as test material.
The dataset is available from [ftp://ftp.cis.upenn.edu/pub/chunker/].
== More information: ==
See here: [http://ifarm.nl/erikt/research/np-chunking.html]
  
  
 
{{StateOfTheArtTable}}
 
  
! System Name !! Short Description !! Main Publications !! Software (if available) !! Results !! Comments (i.e. extra resources used, train/test times, ...)
|-
| KM00 || B-I-O tagging using SVM classifiers with polynomial kernel || KM00 [http://citeseer.comp.nus.edu.sg/rd/0%2C394415%2C1%2C0.25%2CDownload/http://citeseer.comp.nus.edu.sg/cache/papers/cs/18905/http:zSzzSzlcg-www.uia.ac.bezSzconll2000zSzpszSz14244kud.pdf/kudoh00use.pdf] || YAMCHA Toolkit [http://chasen.org/~taku/software/yamcha/] (but models are not provided) ||  F: 93.79 ||  ||
|-
| KM01 || Same learning method as KM00, but with voting between different chunk representations. || KM01 [http://cactus.aist-nara.ac.jp/~taku-ku/publications/naacl2001.pdf] || No. || F: 94.22 ||  ||
|-
| --- || Specialized HMM + voting between different chunk representations. || Sarkar2005 [http://www.cs.sfu.ca/~anoop/papers/pdf/ai05.pdf] || No. || F: 95.23 ||  ||
|}
 
  
* KM00 - Taku Kudo and Yuji Matsumoto. 2000. Use of Support Vector Learning for Chunk Identification. In Proceedings of CoNLL-2000 and LLL-2000.
* KM01 - Taku Kudo and Yuji Matsumoto. 2001. Chunking with Support Vector Machines. In Proceedings of NAACL-2001.
* Sarkar2005 - Hong Shen and Anoop Sarkar. 2005. Voting between Multiple Data Representations for Text Chunking. In Proceedings of the Eighteenth Meeting of the Canadian Society for Computational Intelligence, Canadian AI 2005.
  
 
[[Category:State Of The Art]]
 

Revision as of 10:55, 18 June 2007
