Temporal Information Extraction (State of the art)
Revision as of 02:18, 11 June 2013
Data sets
Performance measures
Results
The following results refer to the TempEval-3 challenge, the most recent evaluation exercise.
Task A: Temporal expression extraction and normalisation
The table shows the best result for each system. Different runs per system are not shown.
System name | Main publication | Type accuracy | Value accuracy | Overall score
---|---|---|---|---
HeidelTime | Strötgen et al., 2013 | 90.91% | 85.95% | 77.61%
NavyTime | Chambers, 2013 | 88.90% | 78.58% | 70.97%
ManTIME | Filannino et al., 2013 | 86.31% | 76.92% | 68.97%
SUTime | Chang and Manning, 2013 | 88.90% | 74.60% | 67.38%
ATT | Jung and Stent, 2013 | 91.34% | 76.91% | 65.57%
ClearTK | Bethard, 2013 | 93.33% | 71.66% | 64.66%
JU-CSE | Kolya et al., 2013 | 87.39% | 73.87% | 63.81%
KUL | Kolomiyets and Moens, 2013 | 88.56% | 75.24% | 62.95%
FSS-TimEx | Zavarella and Tanev, 2013 | 81.08% | 68.47% | 58.24%
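Task A identification is scored with precision, recall, and F1 under two matching criteria: strict matching requires a predicted temporal expression to have exactly the same span as a gold annotation, while lenient (relaxed) matching accepts any overlap. A minimal sketch of these measures, with illustrative spans and helper names not taken from any official TempEval-3 scorer:

```python
# Sketch of strict vs. lenient identification metrics for temporal
# expression extraction. Spans are half-open (start, end) character
# offsets; the example data below is invented for illustration.

def overlaps(a, b):
    """True if half-open spans a and b share at least one character."""
    return a[0] < b[1] and b[0] < a[1]

def prf(gold, predicted, strict=True):
    """Precision, recall and F1 for timex span identification."""
    if strict:
        # Strict: a span counts only if it appears exactly in the pool.
        hit = lambda span, pool: span in pool
    else:
        # Lenient: any overlap with a span in the pool counts.
        hit = lambda span, pool: any(overlaps(span, s) for s in pool)
    tp_pred = sum(1 for p in predicted if hit(p, gold))
    tp_gold = sum(1 for g in gold if hit(g, predicted))
    precision = tp_pred / len(predicted) if predicted else 0.0
    recall = tp_gold / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [(0, 5), (10, 15)]
pred = [(0, 5), (11, 14), (20, 25)]
print(prf(gold, pred, strict=True))   # exact-span matching
print(prf(gold, pred, strict=False))  # overlap matching
```

A boundary error such as predicting (11, 14) against gold (10, 15) is penalised under strict matching but credited under lenient matching, which is why the lenient figures reported for such systems are always at least as high as the strict ones.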
Task B: Event extraction and classification
Task C: Annotating relations given gold entities
Challenges
- TempEval, Temporal Relation Identification, 2007: web page
- TempEval-2, Evaluating Events, Time Expressions, and Temporal Relations, 2010: web page
- TempEval-3, Evaluating Time Expressions, Events, and Temporal Relations, 2013: web page
References
- UzZaman, N., Llorens, H., Derczynski, L., Allen, J., Verhagen, M., and Pustejovsky, J. SemEval-2013 Task 1: TempEval-3: Evaluating time expressions, events, and temporal relations. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 1–9.
- Bethard, S. ClearTK-TimeML: A minimalist approach to TempEval 2013. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 10–14.
- Strötgen, J., Zell, J., and Gertz, M. HeidelTime: Tuning English and developing Spanish resources for TempEval-3. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 15–19.
- Jung, H., and Stent, A. ATT1: Temporal annotation using big windows and rich syntactic and semantic features. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 20–24.
- Filannino, M., Brown, G., and Nenadic, G. ManTIME: Temporal expression identification and normalization in the TempEval-3 challenge. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 53–57.
- Zavarella, V., and Tanev, H. FSS-TimEx for TempEval-3: Extracting temporal information from text. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 58–63.
- Kolya, A. K., Kundu, A., Gupta, R., Ekbal, A., and Bandyopadhyay, S. JU_CSE: A CRF-based approach to annotation of temporal expression, event and temporal relations. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 64–72.
- Chambers, N. NavyTime: Event and time ordering from raw text. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 73–77.
- Chang, A., and Manning, C. D. SUTime: Evaluation in TempEval-3. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 78–82.
- Kolomiyets, O., and Moens, M.-F. KUL: Data-driven approach to temporal parsing of newswire articles. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 83–87.
- Laokulrat, N., Miwa, M., Tsuruoka, Y., and Chikayama, T. UTTime: Temporal relation classification using deep syntactic features. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 88–92.