MUC-7 (State of the art)

* Performance measure: F = 2 * Precision * Recall / (Recall + Precision) (see the worked sketch below)
* Precision: percentage of named entities found by the algorithm that are correct
* Recall: percentage of named entities defined in the corpus that were found by the program
* Exact calculation of precision and recall is explained in the MUC scoring software
* Training data: training section of the MUC-7 dataset
* Testing data: formal section of the MUC-7 dataset
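For concreteness, here is a minimal Python sketch of the F computation above. The precision and recall values in the example are invented for illustration and do not correspond to any system in the table below.

<pre>
def f_measure(precision, recall):
    """Balanced F-measure: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (recall + precision)

# Invented example values (not from the table): a system whose guesses
# are 80% correct and which finds 90% of the entities in the corpus.
print(f_measure(0.80, 0.90))  # 0.84705... -> reported as 84.71%
</pre>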
 
== Table of results ==

{|
! System name
! Short description
! System type
! Main publications
! Software
! Results (F)
|-
| Annotator
| Human annotator
| -
| [http://www.itl.nist.gov/iad/894.02/related_projects/muc/proceedings/muc_7_toc.html MUC-7 proceedings]
| -
| 97.60%
|-
| LTG
| Best MUC-7 participant
| H
| Mikheev, Grover and Moens (1998)
| -
| 93.39%
|-
| Baseline
| Vocabulary transfer from training to testing (see the sketch below)
| S
| Whitelaw and Patrick (2003)
| -
| 58.89%
|}
* '''System type''': R = hand-crafted rules, S = supervised learning, U = unsupervised learning, H = hybrid
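As a gloss on the Baseline row: one plausible reading of "vocabulary transfer" is a pure memorisation system that tags a test-set phrase as an entity exactly when the identical phrase was annotated as an entity in the training data. The Python sketch below implements that reading; it is an illustrative assumption, not Whitelaw and Patrick's actual method or code.

<pre>
# Hypothetical memorisation ("vocabulary transfer") baseline sketch.
# Assumption: a test phrase is tagged iff the identical phrase was
# annotated as an entity in training. Not the authors' implementation.

def train(annotated_sentences):
    """Remember every entity phrase (and its type) seen in training."""
    seen = {}
    for tokens, entities in annotated_sentences:
        for phrase, etype in entities:      # e.g. ("New York", "LOCATION")
            seen[phrase] = etype
    return seen

def tag(tokens, seen, max_len=5):
    """Greedily mark the longest remembered phrases in a test sentence."""
    found, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in seen:
                found.append((phrase, seen[phrase]))
                i += n
                break
        else:
            i += 1                          # no remembered phrase starts here
    return found

# Toy usage: "New York" was seen in training, so it is found in testing.
memory = train([(["He", "visited", "New", "York"],
                 [("New York", "LOCATION")])])
print(tag(["She", "left", "New", "York"], memory))
# -> [('New York', 'LOCATION')]
</pre>

On this reading, such a baseline succeeds only on entities whose surface form recurs between training and testing, so the 58.89% figure indicates how much of the MUC-7 task memorisation alone can cover.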
  
  
 
== References ==

Mikheev, A., Grover, C. and Moens, M. (1998). [http://www-nlpir.nist.gov/related_projects/muc/proceedings/muc_7_proceedings/ltg_muc7.pdf Description of the LTG system used for MUC-7]. ''Proceedings of the Seventh Message Understanding Conference (MUC-7)''. Fairfax, Virginia.

Whitelaw, C. and Patrick, J. (2003). [http://www.springerlink.com/content/ju66c6a2734fl20u/ Evaluating Corpora for Named Entity Recognition Using Character-Level Features]. ''Proceedings of the 16th Australian Conference on AI''. Perth, Australia.
  
 
== See also ==
