RTE6 - Ablation Tests
The following table lists the results of the ablation tests (a mandatory track since the RTE5 campaign) submitted by participants to RTE6.
Participants are kindly invited to check that all the inserted information is correct and complete.
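The resource impact figures correspond to the difference in F1 (in percentage points) between a team's complete run and the corresponding ablation run, so a positive value indicates that the ablated resource helped. The sketch below is a minimal illustration of that computation, assuming an RTE6-style setting in which a run is the set of candidate sentences judged "YES"; the function names and the set-based representation are assumptions made for this example, not the official evaluation script.

```python
# Minimal sketch (assumed, not the official RTE6 scorer): micro-averaged F1
# over the "YES" (entailment) judgments, and the ablation impact as the
# difference in F1 between the complete run and the ablation run.

def f1(gold_yes, system_yes):
    """gold_yes / system_yes: sets of candidate-sentence ids judged YES."""
    if not gold_yes or not system_yes:
        return 0.0
    true_positives = len(gold_yes & system_yes)
    precision = true_positives / len(system_yes)
    recall = true_positives / len(gold_yes)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def resource_impact(gold_yes, full_run_yes, ablation_run_yes):
    """Positive result: the ablated resource contributed positively to the full run."""
    return 100 * (f1(gold_yes, full_run_yes) - f1(gold_yes, ablation_run_yes))

# Toy example:
# resource_impact({"s1", "s2", "s3"}, {"s1", "s2"}, {"s1"})  # ~30.0 percentage points
```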
Ablated Component | Ablation Run[1] | Resource Impact (F1 difference) | Resource Usage Description |
---|---|---|---|
WordNet | BIU1_abl-1 | 0.9 | No WordNet. On the Dev set: 39.18% (compared to 40.73% when WordNet is used) |
CatVar | BIU1_abl-2 | 0.63 | No CatVar. On the Dev set: about 40.20% (compared to 40.73% when CatVar is used) |
Coreference resolver | BIU1_abl-3 | -0.88 | No coreference resolver. On the Dev set: 41.62% (compared to 40.73% when the coreference resolver is used). This is an unusual ablation test, since it shows that the coreference resolution component has a negative impact. |
DIRT | Boeing1_abl-1 | 3.97 | DIRT removed |
WordNet | Boeing1_abl-2 | 4.42 | No WordNet |
Name Normalization | budapestcad2_abl-2 | 0.65 | No name normalization was performed (e.g. George W. Bush -> Bush). |
Named Entities Recognition | budapestcad2_abl-3 | -1.23 | No NER. |
WordNet | budapestcad2_abl-4 | -1.11 | No WordNet. (In the original run, WordNet was used to find the synonyms of words in the triplets, and additional triplets were generated from all possible combinations; a sketch of this kind of expansion is given after the table.) |
WordNet | deb_iitb1_abl-1 | 8.68 | WordNet is ablated in this test. No code change was required; only the WordNet module was removed during matching. |
VerbOcean | deb_iitb1_abl-2 | 1.87 | VerbOcean is ablated in this test. No code change was required; only the VerbOcean module was removed during matching. |
WordNet | deb_iitb2_abl-1 | 7.9 | WordNet is ablated in this test. No code change was required; only the WordNet module was removed during matching. |
VerbOcean | deb_iitb2_abl-2 | 0.94 | VerbOcean is ablated in this test. No code change was required; only the VerbOcean module was removed during matching. |
WordNet | deb_iitb3_abl-1 | 11.43 | WordNet is ablated in this test. No code change was required; only the WordNet module was removed during matching. |
VerbOcean | deb_iitb3_abl-2 | 2.54 | VerbOcean is ablated in this test. No code change was required; only the VerbOcean module was removed during matching. |
POS-Tagger | DFKI1_abl-4 | 4.99 | No wordform/POS-tags included for the comparison. |
Named Entities Recognition | DFKI1_abl-6 | 2.22 | No named entity recognition for the comparison. |
WordNet | DFKI1_abl-7 | -0.23 | No WordNet similarity for the comparison. |
Coreference resolver | DFKI1_Main | -1.54 | Coreference resolution used for the comparison. |
WordNet | DirRelCond23_abl-1 | 8.43 | WordNet removed. Only basic word comparison used instead of word relations. |
Wikipedia | FBK_irst3_Main | -23.91 | This run is produced by the system configuration for run3 and uses rules extracted from Wikipedia |
Wikipedia | FBK_irst3_Main | -3.58 | This run is produced by the system configuration for run3 and uses rules extracted from Wikipedia with probability above 0.7 |
Proximity similarity dictionary of Dekang Lin | FBK_irst3_Main | -7.79 | This run is produced by the system configuration for run3 and uses rules extracted from proximity similarity dictionary of Dekang Lin |
WordNet | FBK_irst3_Main | -3.21 | This run is produced by the system configuration for run3 and uses rules extracted from WordNet |
WordNet | FBK_irst3_Main | -2.08 | This run is produced by the system configuration for run3 and uses rules extracted from WordNet with probability above 0.7 |
VerbOcean | FBK_irst3_Main | -4 | This run is produced by the system configuration for run3 and uses rules extracted from VerbOcean |
Dependency similarity dictionary of Dekang Lin | FBK_irst3_Main | -13.56 | This run is produced by the system configuration for run3 and uses rules extracted from dependency similarity dictionary of Dekang Lin |
Dictionary of Named Entities Acronyms and Synonyms | IKOMA2_abl-3 | -0.76 | Removed synonym dictionaries: an acronym dictionary constructed automatically from the corpus and a synonym dictionary containing geographical terms. |
WordNet | JU_CSE_TAC1_abl-1 | 13.29 | Run-1 is based on the combination of lexical RTE methods and a syntactic RTE method. The lexical RTE methods are: WordNet-based unigram match, bigram match, longest common subsequence, skip-gram and stemming. Here only the WordNet-based unigram match was ablated. |
WordNet | JU_CSE_TAC2_abl-1 | 10.19 | Run-2 is based on the combination of lexical RTE methods, a syntactic RTE method, and Chunk and Named Entity methods. The lexical RTE methods are: WordNet-based unigram match, bigram match, longest common subsequence, skip-gram and stemming. Here only the WordNet-based unigram match was ablated. |
WordNet | JU_CSE_TAC3_abl-1 | 3.86 | Run-3 is based on a Support Vector Machine that uses twenty-five features for lexical similarity, the output tag of a rule-based syntactic two-way TE system as a feature, and the output of a rule-based Chunk module and Named Entity module. The important lexical features used in the present system are: WordNet-based unigram match, bigram match, longest common subsequence, skip-gram, stemming and lexical distance (17 features). Here only the WordNet-based unigram match was ablated. |
LingPipe co-reference | PKUTM2_abl-1 | 0.17 | LingPipe co-reference was removed; the experiment was based on named entities, WordNet and VerbOcean. |
VerbOcean | PKUTM2_abl-2 | 1.02 | VerbOcean was removed; the experiment was based on named entities, WordNet and co-reference. |
LingPipe Named Entities | PKUTM2_abl-3 | 13.84 | LingPipe named-entity recognition was removed; the experiment was based on WordNet, co-reference and VerbOcean. |
WordNet | saicnlp1_abl-1 | -0.02 | Ablation run, with WordNet stubbed. |
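Several of the ablations above amount to switching off a WordNet lookup at matching time (the deb_iitb runs) or dropping WordNet-based synonym expansion of extracted triplets (budapestcad2_abl-4). The fragment below is a hypothetical sketch of such an expansion step using NLTK's WordNet interface; the triplet representation, the use_wordnet flag and the function names are illustrative assumptions, not code from any submitted system.

```python
# Hypothetical sketch of WordNet-based triplet expansion (cf. budapestcad2_abl-4).
# Requires: pip install nltk  and  nltk.download('wordnet')
from itertools import product
from nltk.corpus import wordnet as wn

def synonyms(word):
    """Return the word itself plus its WordNet synonyms (lemma names)."""
    lemmas = {word}
    for synset in wn.synsets(word):
        lemmas.update(lemma.name().replace('_', ' ') for lemma in synset.lemmas())
    return lemmas

def expand_triplet(triplet, use_wordnet=True):
    """Generate all synonym combinations of a (subject, relation, object) triplet.
    With use_wordnet=False (the ablation setting), only the original triplet is kept."""
    if not use_wordnet:
        return [triplet]
    subj, rel, obj = triplet
    return [(s, r, o) for s, r, o in product(synonyms(subj), synonyms(rel), synonyms(obj))]

# Example: compare the full and ablated behaviour on an illustrative triplet.
# expand_triplet(("company", "buy", "firm"))
# expand_triplet(("company", "buy", "firm"), use_wordnet=False)
```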
Footnotes
- [1] For further information about participants, see RTE Challenges - Data about participants
Return to RTE Knowledge Resources