RTE7 - Ablation Tests
The following table lists the results of the ablation tests submitted by participants to RTE7.
The exploratory effort on knowledge resources, started in RTE5 and extended to tools in RTE6, was proposed again in RTE7.
In the table below, the first column contains the specific resource which has been ablated.
The second column lists the ablation run, in the form [name_of_the_Team][number_of_the_submitted_run]_abl-[number_of_the_ablation_test] (e.g. BIU2_abl-1, IKOMA3_abl-2).
The third column presents the difference between the F1 score achieved by the complete system run and the F1 score achieved by the ablation run (i.e. the run of the complete system without the ablated resource), expressed in percentage points; it shows the impact of the resource on the performance of the system (see the sketch below).
The fourth column contains a brief description of the specific usage of the resource, based on the information provided both in the "readme" files submitted together with the ablation tests and in the system reports published in the RTE7 proceedings.
If the ablated component is highlighted in yellow, it is a tool; otherwise it is a knowledge resource.
Participants are kindly invited to check that all the inserted information is correct and complete.
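As a point of reference, here is a minimal sketch of how the impact score in the third column can be computed. It is an illustration only, not the official RTE7 evaluation tool: the representation of entailment judgments as sets of (topic, hypothesis) pair identifiers and all function names are assumptions.

```python
# Hedged sketch: per-resource impact as the F1 of the complete run minus
# the F1 of the ablation run, expressed in percentage points.

def f1_score(gold: set, predicted: set) -> float:
    """Micro-averaged F1 over the T-H pairs judged as entailment."""
    if not gold or not predicted:
        return 0.0
    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def resource_impact(gold: set, complete_run: set, ablation_run: set) -> float:
    """Positive values mean the ablated resource helped the system."""
    return 100 * (f1_score(gold, complete_run) - f1_score(gold, ablation_run))

# Toy example with hypothetical pair identifiers:
gold = {("D1001", "H1"), ("D1001", "H3"), ("D1002", "H2")}
complete = {("D1001", "H1"), ("D1001", "H3")}
ablated = {("D1001", "H1")}
print(round(resource_impact(gold, complete, ablated), 2))  # 80.0 - 50.0 = 30.0
```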
Ablated Component | Ablation Run[1] | Resource Impact (F1) | Resource Usage Description |
---|---|---|---|
WordNet | BIU2_abl-1 | -0.05 | Without WordNet, which is used as a lexical rule-base resource. |
Direct | BIU1_abl-2 | 0.94 | Without Bap (a.k.a. "Direct"), which is used as a lexical rule-base resource. |
Wikipedia | BIU1_abl-3 | 1.56 | Without Wikipedia, which is used as a lexical rule-base resource. |
Coreference resolver | BIU1_abl-4 | 0.69 | Without any coreference resolution engine, instead of using ArkRef to obtain coreference information from the text during preprocessing. |
WordNet | DFKI_abl-1 | -0.14 | Features based on WordNet similarity measures (JWNL). |
Named Entity Recognition | DFKI_abl-2 | 2.08 | Features based on named entity recognition. |
Wikipedia | FBK_irst3_abl-2 | -2.64 | Ablating the Wikipedia LSA similarity scores. |
Named Entity Recognition | FBK_irst3_abl-3 | -0.89 | Ablating the named entity matching module. |
Paraphrase Table | FBK_irst3_abl-4 | -1.43 | Ablating the paraphrase matching module. The paraphrases were extracted from parallel corpora. |
Acronym List | IKOMA3_abl-1 | -0.16 | No acronyms of organization names extracted from the corpus. |
CatVar | IKOMA3_abl-2 | 0.84 | No CatVar. |
WordNet | IKOMA3_abl-3 | 0.85 | No WordNet. |
WordNet | JU_CSE_TAC1_abl-1 | 9.81 | WordNet ablated. |
Named Entity Recognition | JU_CSE_TAC1_abl-2 | 7.97 | NER ablated. |
WordNet | SINAI1_abl-1 | -0.12 | Resource ablated: lexical similarity module based on Personalized Page Rank vectors over WordNet 3.0 |
Wikipedia | SJTU_CIT1_abl-1 | 8.89 | Removed the Wikipedia resource. |
VerbOcean | SJTU_CIT1_abl-2 | 5.93 | Removed the VerbOcean resource. |
WordNet | u_tokyo1_abl-1 | 0.83 | Ablated resource is WordNet |
WordNet | u_tokyo2_abl-2 | 0.64 | Ablated resource is WordNet |
WordNet | u_tokyo2_abl-3 | 0.99 | Ablated resource is WordNet |
UAIC Knowledge Resource | UAIC20112_abl-1 | 0 | Ablation of the background knowledge (BK) component (acronym database and world knowledge). |
Named Entity Recognition | UAIC20112_abl-3 | -8.29 | Ablation of the NE resources. |
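To make the table's recurring "lexical rule-base" usage more concrete: in this pattern a system accepts that a text word entails a hypothesis word if WordNet links them by synonymy or hypernymy. The sketch below illustrates the idea with NLTK's WordNet interface; the function name, hop limit, and exact entailment criterion are illustrative assumptions, not any team's actual implementation.

```python
# Generic sketch of WordNet as a lexical rule base (requires the WordNet
# corpus: nltk.download('wordnet')). All names here are illustrative.
from nltk.corpus import wordnet as wn

def lexically_entails(text_word: str, hyp_word: str, max_hops: int = 4) -> bool:
    """True if some sense of text_word shares a synset with, or lies within
    max_hops hypernym links below, some sense of hyp_word."""
    hyp_synsets = set(wn.synsets(hyp_word))
    for synset in wn.synsets(text_word):
        if synset in hyp_synsets:              # synonymy: shared synset
            return True
        frontier = {synset}
        for _ in range(max_hops):              # walk up the hypernym hierarchy
            frontier = {h for s in frontier for h in s.hypernyms()}
            if frontier & hyp_synsets:         # e.g. "car" -> ... -> "vehicle"
                return True
    return False

print(lexically_entails("car", "vehicle"))     # True, via hypernymy
print(lexically_entails("vehicle", "car"))     # False: entailment is directional
```

Ablating such a resource disables these lookups, so the run falls back to exact word matching; the F1 differences above measure how much that hurts (or, for negative values, helps).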
Footnotes
- ↑ For further information about participants, see RTE Challenges - Data about participants.