WordNet - RTE Users
When not otherwise specified, the data about the version, usage, and evaluation of the resource have been provided by the participants themselves.
Participants* | Campaign | Version | Specific usage description | Evaluations / Comments
---|---|---|---|---
AUEB | RTE5 | | During the calculation of the similarity measures, we treat words from T and H that are synonyms according to WordNet as identical. | Ablation test performed. Negative impact of the resource: -2% accuracy on two-way, -2.67% on three-way task.
BIU | RTE5 | 3.0 | Synonyms, hyponyms (2 levels away from the original term), the hyponym_instance relation and derivations. | Ablation test performed. Positive impact of the resource: +2.5% accuracy on two-way task. |
Boeing | RTE5 | | The system makes use of WordNet synonym and hypernym relationships between (senses of) words, together with the "similar" (SIM), "pertains" (PER), and "derivational" (DER) links, to recognize equivalence between T and H. | Ablation test performed. Positive impact of the resource: +4% accuracy on two-way, +5.67% on three-way task.
DFKI | RTE5 | | FIRST USE: argument alignment between T and H. SECOND USE: turning all nominal predicates into verbs, in order to compute relatedness between T and H (using VerbOcean). | FIRST USE: ablation test performed. Impact of the resource: -0.17%/null accuracy respectively on two-way and three-way task for run1; +0.16%/+0.34% for run2; +0.17%/+0.17% for run3. SECOND USE (WordNet+VerbOcean): null/+0.17% accuracy respectively on two-way and three-way task for run1; +0.5%/+0.67% for run2; +0.17%/+0.17% for run3.
DirRelCond | RTE5 | | Use of many WordNet relations (synonymy, hypernymy, hyponymy, meronymy, holonymy, etc.) to compute the relatedness between words with the same part of speech in T and H. | No ablation test performed: the resource cannot be removed without breaking the functionality of the system.
DLSIUAES | RTE5 | | FIRST USE: similarity between lemmata, computed by WordNet-based metrics. SECOND USE: antonymy relations between verbs. | FIRST USE: ablation test performed. Positive impact of the resource on the two-way run: +0.83% accuracy. Negative impact on the three-way run: -0.33% accuracy (-0.5% for two-way derived). SECOND USE (WordNet+VerbOcean+DLSIUAES_negation_list): positive impact on the two-way run: +0.66% accuracy. Negative impact on the three-way run: -1% (-0.5% for two-way derived).
FBKirst | RTE5 | 3.0 | Extraction of a set of 2698 English entailment rules for terms connected by the hyponymy and synonymy relations. | No ablation test performed.
JU_CSE_TAC | RTE5 | | WordNet-based unigram match: if any synset of an H unigram matches a synset of a word in T, the hypothesis unigram is counted as a WordNet-based unigram match (see the matching sketch below the table). | Ablation test performed. Positive impact of the resource: +0.34% on two-way task.
PeMoZa | RTE5 | | FIRST USE: derivational morphology. SECOND USE: verb entailment. | Ablation tests performed. FIRST USE: impact of the resource on two-way task: -0.5%/+1% accuracy respectively on run1 and run2.
QUANTA | RTE5 | | Several relations from WordNet, such as synonymy, hyponymy, and hypernymy. | Ablation test performed. Negative impact of the resource: -0.17% on two-way task.
Sagan | RTE5 | | Used to obtain two features (string similarity based on Levenshtein distance and semantic similarity) in the training and testing steps of the system. | Ablation test performed. Null/negative (-0.87%) impact of the resource respectively on two-way and three-way task.
Siel_09 | RTE5 | | Similarity between nouns, computed with the WordNet tool. | Ablation test performed. Impact of the resource: +0.34% accuracy on two-way, -0.17% on three-way task.
UAIC | RTE5 | | FIRST USE: antonymy relation to detect contradiction; to broaden the coverage of the antonymy relation, we consider a combination of synonyms and antonyms (used together with VerbOcean). SECOND USE: synonymy, hyponymy and hypernymy for nouns and adjectives (used together with eXtended WordNet relations). | FIRST USE: ablation test performed (WordNet + VerbOcean). Positive impact of the two resources together: +2% accuracy on two-way, +1.5% on three-way task. SECOND USE: ablation test performed (WordNet + eXtended WordNet). Positive impact of the two resources together: +1% accuracy on two-way, +1.33% on three-way task.
UB.dmirg | RTE5 | | When using WordNet, we assume that a term is semantically interchangeable with its exact occurrence, its synonyms, and its hypernyms; hypernyms more than two links away from the original term in the WordNet synset hierarchy are excluded (see the expansion sketch below the table). | Two ablation tests performed: the first for WordNet alone, the second for both WordNet and FrameNet. Null impact of the resource(s) on the two-way task for both ablations.
AUEB | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
BIU | RTE4 | 3.0 | Synonyms, hyponyms (2 levels away from the original term), the hyponym_instance relation and derivations. Also used as part of our novel lexical-syntactic resource. | 0.8% improvement in ablation test on RTE4. The potential contribution is higher, since this resource partially overlaps with the novel lexical-syntactic rule base.
Boeing | RTE4 | 2.0 | Semantic relations between words. | No formal evaluation; the resource plays a role in most entailments found.
Cambridge | RTE4 | 3.0 | Meaning postulates from WordNet noun hyponymy, e.g. forall x: cat(x) -> animal(x) (see the rule sketch below the table). | No systematic evaluation.
CERES | RTE4 | 3.0 | Hypernyms, antonyms, indexWords (N, V, Adj, Adv). | Used, but no evaluation performed.
DFKI | RTE4 | 3.0 | Semantic relations between words. | No separate evaluation.
DLSIUAES | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
EMORY | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
FbkIrst | RTE4 | 3.0 | Lexical similarity. | No precise evaluation of the resource has been carried out. In our second run we used a combined system (EDITSneg + EDITSallbutneg) and obtained a 0.6% improvement in accuracy over the first run, in which only EDITSneg was used. EDITSallbutneg exploits lexical similarity (WordNet similarity), but we cannot state with precision that the improvement is due only to the use of WordNet.
FSC | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
IIT | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
IPD | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
OAQA | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
QUANTA | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
SAGAN | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
Stanford | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
UAIC | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
UMD | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
UNED | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
Uoeltg | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
UPC | RTE4 | | | Data taken from the RTE4 proceedings. Participants are recommended to add further information.
AUEB | RTE3 | 2.1 | Synonymy resolution: replacing the words of H with their synonyms in T. | 2% improvement on the RTE3 data sets.
UIUC | RTE3 | | Semantic distance between words. |
VENSES | RTE3 | 3.0 | Semantic relations between words. | No evaluation of the resource.
New user | | | | Participants are encouraged to contribute.
Total: 24

[*] For further information about participants, see RTE Challenges - Data about participants.
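Several RTE5 entries above (BIU, JU_CSE_TAC, UB.dmirg) describe variants of the same lexical-matching idea: a hypothesis word is matched if it is the text word itself, shares a synset with it, or is a close hypernym. The sketch below, in Python using NLTK's WordNet interface, only illustrates those descriptions; the function names are hypothetical, and the two-link cut-off is taken from the UB.dmirg entry, not from any participant's actual code.

```python
# Illustrative sketch only: requires NLTK with the WordNet corpus installed
# (import nltk; nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

def expansion(word, max_links=2):
    """Return the word itself, its WordNet synonyms, and the lemmas of its
    hypernyms up to max_links links away (the two-link cut-off that the
    UB.dmirg entry describes)."""
    terms = {word}
    for synset in wn.synsets(word):
        terms.update(lemma.name() for lemma in synset.lemmas())  # synonyms
        frontier = [synset]
        for _ in range(max_links):  # walk at most max_links hypernym links
            frontier = [h for s in frontier for h in s.hypernyms()]
            terms.update(lemma.name()
                         for hyp in frontier for lemma in hyp.lemmas())
    return terms

def wn_unigram_match(h_word, t_words):
    """JU_CSE_TAC-style check: an H unigram matches if any of its synsets
    coincides with a synset of some word in T."""
    h_synsets = set(wn.synsets(h_word))
    return any(h_synsets & set(wn.synsets(t_word)) for t_word in t_words)

# "feline" is one hypernym link above "cat", so the two are interchangeable:
print("feline" in expansion("cat"))                               # True
# "car" and "automobile" share the synset car.n.01:
print(wn_unigram_match("automobile", ["the", "car", "crashed"]))  # True
```

Raising max_links trades precision for recall, which is presumably why the descriptions above cap the hypernym distance at two links.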
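The Cambridge RTE4 entry compiles the same hyponymy information into explicit meaning postulates such as forall x: cat(x) -> animal(x). Below is a minimal sketch, under stated assumptions, of how such rules can be read off WordNet's direct hypernym links; the helper name and rule syntax are illustrative, not the Cambridge system's actual format. Direct links yield the immediate hypernym (e.g. feline); a chain of such rules reaches animal by transitivity.

```python
# Illustrative sketch only; not the Cambridge system's actual rule format.
from nltk.corpus import wordnet as wn

def meaning_postulates(word):
    """Emit one 'forall x: sub(x) -> super(x)' postulate per direct
    hypernym of each noun sense of the word."""
    rules = []
    for synset in wn.synsets(word, pos=wn.NOUN):
        sub = synset.lemmas()[0].name()
        for hyper in synset.hypernyms():
            rules.append(f"forall x: {sub}(x) -> {hyper.lemmas()[0].name()}(x)")
    return rules

print(meaning_postulates("cat")[0])  # forall x: cat(x) -> feline(x)
```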
Return to RTE Knowledge Resources