Textual Entailment Resource Pool
Textual entailment systems rely on many different types of NLP resources, such as term banks, paraphrase lists, parsers, and named-entity recognizers. With so many resources being continuously released and improved, it can be difficult to know which particular resource to use when developing a system.
In response, the Recognizing Textual Entailment (RTE) shared task community initiated a new activity to build this Textual Entailment Resource Pool. RTE participants and other members of the NLP community are encouraged to contribute to the pool.
To help determine the relative impact of the resources, RTE participants are strongly encouraged to report, whenever possible, the contribution of each resource they use to overall system performance (see the sketch below). Formal qualitative and quantitative results should be included in a separate section of the system report and also posted on the talk pages of this Textual Entailment Resource Pool.
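A common way to measure a resource's contribution is a simple ablation: run the system with and without the resource on the same data set and compare accuracy. The following is only a minimal sketch in Python; the file names and label format are hypothetical placeholders, not part of any RTE distribution.

    # Minimal ablation sketch: compare accuracy with and without one resource.
    # The file names and label values below are hypothetical placeholders.
    def accuracy(gold, predicted):
        correct = sum(1 for g, p in zip(gold, predicted) if g == p)
        return correct / len(gold)

    def read_labels(path):
        # One label per line, e.g. "YES" or "NO".
        with open(path) as f:
            return [line.strip() for line in f]

    gold = read_labels("rte_gold_labels.txt")
    full_system = read_labels("predictions_full_system.txt")
    without_resource = read_labels("predictions_without_resource.txt")

    delta = accuracy(gold, full_system) - accuracy(gold, without_resource)
    print(f"Contribution of the ablated resource: {delta:+.3f} accuracy")

A negative difference would indicate that the resource hurts performance on that data set.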
Adding a new resource is very easy. See how to use existing templates to do this in Help:Using Templates.
Complete RTE Systems
- VENSES (from Ca' Foscari University of Venice, Italy)
- Nutcracker (available for download)
- Entailment Demo (from the University of Illinois at Urbana-Champaign)
Resources
RTE data sets
- RTE 2006 Test Set manually annotated with FrameNet. Provided by the SALSA project, Saarland University.
- Manually Word Aligned RTE 2006 Data Sets. Provided by the Natural Language Processing Group, Microsoft Research.
- Microsoft Research Paraphrase Corpus.
- RTE data sets annotated for a 3-way decision: entails, contradicts, unknown. Provided by the Stanford NLP Group.
- BPI RTE data set - 250 pairs, focusing on world knowledge. Provided jointly by Boeing, Princeton, and ISI.
Linguistic Knowledge
- DIRT Paraphrase Collection
- FrameNet
- Sekine's Paraphrase Database
- TEASE Entailment Rule Collection
- VerbOcean
- WordNet
Tools
Parsers
- C&C parser for Combinatory Categorial Grammar
- Minipar
- Shallow Parser - from the University of Illinois at Urbana-Champaign, see a web demo of this tool
Role Labeling
- ASSERT
- Shalmaneser
- Semantic Role Labeler - from the University of Illinois at Urbana-Champaign, see a web demo of this tool
Entity Recognition Tools
- CCG Named Entity Tagger - see a web demo of this tool
- CCG Multi-lingual Named Entity Discovery Tool - see a web demo of this tool
Corpus Readers
- NLTK provides a corpus reader for the data from RTE Challenges 1, 2, and 3 - see the Corpus Readers Guide for more information.
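As a quick illustration, here is a minimal sketch of loading RTE pairs with NLTK's reader. It assumes the standard 'rte' corpus package and its usual file names (e.g. rte3_dev.xml); check the Corpus Readers Guide for the authoritative details.

    # Minimal sketch: reading RTE challenge data with NLTK's corpus reader.
    import nltk
    from nltk.corpus import rte

    nltk.download('rte', quiet=True)  # fetch the corpus if it is not installed yet

    print(rte.fileids())  # dev and test files for RTE-1, RTE-2, and RTE-3

    # Each pair carries the text, the hypothesis, and the gold entailment label.
    for pair in rte.pairs('rte3_dev.xml')[:3]:
        print(pair.text)
        print(pair.hyp)
        print(pair.value)  # 1 = entailment, 0 = no entailment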