State of the art
The purpose of this section of the ACL Wiki is to serve as a repository of state-of-the-art results (i.e., methods and software) for various core natural language processing tasks.
As a side effect, it should evolve into a knowledge base of standard evaluation methods and datasets for various tasks, and encourage greater effort toward reproducibility of results. This will help newcomers to a field appreciate what has been done so far and what the main tasks are, and will help keep active researchers informed about fields outside their own specific research. The next time you need a system for PP attachment, or wonder about the current state of word sense disambiguation, this will be the place to visit.
Please contribute! (This is also a good place for you to display your results!)
As a historical point of reference, you may want to consult the Survey of the State of the Art in Human Language Technology (also available as PDF), edited by R. Cole, J. Mariani, H. Uszkoreit, G. B. Varile, A. Zaenen, A. Zampolli, and V. Zue (1996).
- Analogy -- SAT, SemEval-2012 Task 2, Syntactic Analogies, Google analogy test set, Bigger analogy test set
- Anaphora Resolution (stub)
- Automatic Text Summarization (stub)
- Chunking (stub)
- Dependency Parsing (stub)
- Document Classification (stub)
- Language Identification (stub)
- Named Entity Recognition
- Noun-Modifier Semantic Relations
- NP Chunking
- Paraphrase Identification
- Parsing
- POS Induction
- POS Tagging
- PP Attachment (stub)
- Question Answering
- Semantic Role Labeling (stub)
- Sentiment Analysis (stub)
- Similarity -- ESL, SAT, TOEFL, RG-65 Test Collection, MC-28 Test Collection, SimLex-999 Similarity Test Collection, WordSimilarity-353, SemEval-2012 Task 2, MEN Test Collection (a sketch of the usual rated-pair evaluation protocol follows this list)
- Speech Recognition (article request)
- Temporal Information Extraction
- Web Corpus Cleaning (stub)
- Word Segmentation (stub)
- Word Sense Disambiguation (stub)
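
Most of the rated-pair similarity collections listed above (e.g., WordSimilarity-353, SimLex-999, MEN) are scored the same way: the model assigns a similarity score to each word pair, and the reported result is the Spearman rank correlation with the human ratings. Below is a minimal sketch of that protocol, assuming a plain dictionary of word vectors and a list of (word1, word2, human score) pairs; the toy vectors and ratings at the bottom are illustrative placeholders, not data from any of the collections above.

```python
# Minimal sketch of the standard word-similarity evaluation:
# cosine similarity per word pair, then Spearman correlation with human ratings.
import math
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def evaluate_similarity(vectors, pairs):
    """vectors: dict word -> vector; pairs: list of (word1, word2, human_score).
    Returns the Spearman rank correlation between model and human scores,
    skipping pairs with out-of-vocabulary words."""
    model_scores, human_scores = [], []
    for w1, w2, gold in pairs:
        if w1 in vectors and w2 in vectors:
            model_scores.append(cosine(vectors[w1], vectors[w2]))
            human_scores.append(gold)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho

# Toy, made-up vectors and ratings, purely for illustration.
toy_vectors = {
    "tiger": [0.9, 0.1], "cat": [0.8, 0.2],
    "computer": [0.1, 0.9], "keyboard": [0.2, 0.8],
}
toy_pairs = [
    ("tiger", "cat", 7.35),
    ("computer", "keyboard", 7.62),
    ("tiger", "keyboard", 0.92),
]
print(evaluate_similarity(toy_vectors, toy_pairs))
```

Multiple-choice collections such as ESL, SAT, and TOEFL are scored differently (accuracy at picking the correct answer among the given choices); see the individual task pages for details.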