WordNet - RTE Users

From ACL Wiki
 
| RTE3
| 3.0
| Semantic relation between words
| No evaluation of the resource
|- bgcolor="#ECECEC" align="left"
| AUEB
| RTE3
| 2.1
| Synonymy resolution
| Replacing the words of H with their synonyms in T: 2% improvement on the RTE3 data sets
|- bgcolor="#ECECEC" align="left"
| Boeing
| RTE4
| 2.0
| Semantic relation between words
| No formal evaluation. Plays a role in most entailments found.
|- bgcolor="#ECECEC" align="left"
| DFKI
| RTE4
| 3.0
| Semantic relation between words
| No separate evaluation
|- bgcolor="#ECECEC" align="left"
| CERES
| RTE4
| 3.0
| Hypernyms, antonyms, indexWords (N, V, Adj, Adv)
| Used, but no evaluation performed
|- bgcolor="#ECECEC" align="left"
| Cambridge
| RTE4
| 3.0
| Meaning postulates from WordNet noun hyponymy, e.g. forall x: cat(x) -> animal(x)
| No systematic evaluation
|- bgcolor="#ECECEC" align="left"
| BIU
| RTE4
| 3.0
| Lexical entailment rules derived on the fly, using synonyms, hypernyms (up to two levels) and derivations. Also used as part of our novel lexical-syntactic resource
| 0.8% improvement in an ablation test on RTE-4. The potential contribution is higher, since this resource partially overlaps with the novel lexical-syntactic rule base
|- bgcolor="#ECECEC" align="left"
| FbkIrst
| RTE4
| 3.0
| Lexical similarity
| No precise evaluation of the resource has been carried out. In our second run we used a combined system (EDITSneg + EDITSallbutneg) and obtained a 0.6% improvement in accuracy over the first run, in which only EDITSneg was used. EDITSallbutneg exploits lexical similarity (WordNet similarity), but we cannot state with precision that the improvement is due only to the use of WordNet
|}

{|class="wikitable" cellpadding="3" cellspacing="0" border="0" style="margin-left: 20px;"
|-
! align="left"|Total: 8
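To illustrate the kind of usage described in the BIU entry (lexical entailment rules derived on the fly from synonyms and hypernyms up to two levels), here is a minimal, self-contained sketch. It uses a small hand-made hypernym and synonym table as a stand-in for the real WordNet database, so the entries and the <code>entailment_rules</code> helper are illustrative assumptions, not BIU's actual implementation:

<pre>
# Toy stand-in for WordNet's noun synonym/hypernym data (illustrative only;
# a real system would query the WordNet database instead).
HYPERNYMS = {
    "cat": ["feline"],
    "feline": ["animal"],
    "animal": ["organism"],
}
SYNONYMS = {"cat": ["kitty"]}

def entailment_rules(word, max_levels=2):
    """Derive lexical entailment rules (lhs -> rhs) for a word:
    its synonyms, plus hypernyms up to max_levels steps away."""
    rules = {(word, s) for s in SYNONYMS.get(word, [])}
    frontier = [word]
    for _ in range(max_levels):
        # Climb one level of the hypernym hierarchy per iteration.
        frontier = [h for w in frontier for h in HYPERNYMS.get(w, [])]
        rules.update((word, h) for h in frontier)
    return rules

print(sorted(entailment_rules("cat")))
# → [('cat', 'animal'), ('cat', 'feline'), ('cat', 'kitty')]
</pre>

With <code>max_levels=2</code>, "organism" (three levels above "cat" in this toy hierarchy) is deliberately not reached, mirroring the two-level cutoff mentioned in the table.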

Revision as of 03:29, 6 April 2009

