DIRT Paraphrase Collection - RTE Users

From ACL Wiki
<br />
{|class="wikitable sortable" cellpadding="3" cellspacing="0" border="1"
|- bgcolor="#CDCDCD"
! nowrap="nowrap"|Participants*
! nowrap="nowrap"|Campaign
! nowrap="nowrap"|Version
! nowrap="nowrap"|Specific usage description
! nowrap="nowrap"|Evaluations / Comments

|- bgcolor="#ECECEC" align="left"
| BIU
| RTE5
| 
| We used the canonical DIRT rulebase version of Szpektor and Dagan (RANLP 2007), and considered top 25 rules.
| Ablation test performed. Positive impact of the resource: +1.33% accuracy on two-way task.

|- bgcolor="#ECECEC" align="left"
| Boeing
| RTE5
| 
| Verb paraphrases
| Ablation test performed. Negative impact of the resource on two-way task: -1.17% accuracy. Null impact of the resource on three-way task.

|- bgcolor="#ECECEC" align="left"
| UAIC
| RTE5
| 
| Use of DIRT relations to map verbs in T with verbs in H
| Ablation test performed. Positive impact of the resource: +0.17% accuracy on two-way, +0.33% on three-way task.

|- bgcolor="#ECECEC" align="left"
| BIU
| RTE4
| 
| We used the canonical DIRT rulebase version of Szpektor and Dagan (RANLP 2007), and considered top 25 rules.
| Verb-net + Nom-lex plus + Parc Polarity Lexicon + DIRT Paraphrase Collection = +0.9% on RTE-4 ablation tests.

|- bgcolor="#ECECEC" align="left"
| Boeing
| RTE4
| Original DIRT db
| Elaborate T sentence with DIRT-implied entailments
| precision/recall in RTE4:<br>
boeing run1: 67%/6%;<br>
boeing run2: 54%/30%

|- bgcolor="#ECECEC" align="left"
| UAIC
| RTE4
| 
| Use of DIRT relations to map verbs in T with verbs in H
| Ablation test performed: +0.7% precision on two-way task.

|- bgcolor="#ECECEC" align="left"
| Uoeltg
| RTE4
| 
| 
| ''Data taken from the RTE4 proceedings. Participants are recommended to add further information.''

|- bgcolor="#ECECEC" align="left"
| UAIC
| RTE3
| 
| Use of DIRT relations to map verbs in T with verbs in H
| Ablation test performed: +0.37% precision on two-way task.

|- bgcolor="#ECECEC" align="left"
| UIUC
| RTE3
| 
| Paired verb/argument patterns
| 
|}

{|class="wikitable" cellpadding="3" cellspacing="0" border="0" style="margin-left: 20px;"
|-
! align="left"|Total: 9
|}
<br>

[*] For further information about participants, see [[RTE Challenges - Data about participants]]

Return to [[RTE Knowledge Resources]]

''Latest revision as of 10:32, 22 December 2009''
