Call for Shared Task Participation
SemEval 2016 Task 1: Semantic Textual Similarity (STS)
Semantic Textual Similarity (STS) measures the degree of equivalence in the underlying semantics of paired snippets of text. While making such an assessment is trivial for humans, constructing algorithms and computational models that mimic human level performance represents a difficult and deep natural language understanding (NLU) problem.
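As a concrete illustration of what an STS system outputs, here is a minimal sketch (not an official SemEval system) of a simple baseline that scores a sentence pair by token overlap (Jaccard similarity), scaled to the 0-5 range used by STS gold annotations; the function name and scoring choice are illustrative assumptions.

```python
# Illustrative STS baseline (assumption, not an official SemEval system):
# score a sentence pair by Jaccard token overlap, scaled to the 0-5
# similarity range used in STS annotations.
def sts_baseline(sent_a: str, sent_b: str) -> float:
    tokens_a = set(sent_a.lower().split())
    tokens_b = set(sent_b.lower().split())
    if not tokens_a or not tokens_b:
        return 0.0
    overlap = len(tokens_a & tokens_b)
    union = len(tokens_a | tokens_b)
    return 5.0 * overlap / union
```

Identical sentences score 5.0 and fully disjoint sentences score 0.0; real submissions replace the overlap heuristic with far richer semantic models, which is exactly the gap the task measures.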
SemEval 2014 Task 3: Cross-Level Semantic Similarity
The aim of this task is to evaluate semantic similarity when comparing lexical items of different types, such as paragraphs, sentences, phrases, words, and senses.
SemEval-3: 6th International Workshop on Semantic Evaluations (Call for Task Proposals - Extended Deadline)
Submitted by Suresh Manandhar on 27 October 2010 - 11:36am
APOLOGIES FOR CROSS POSTING
6th International Workshop on Semantic Evaluations
2nd Call for Task Proposals - Extended Deadline
The SemEval Programme Committee invites proposals for tasks to be run as part of SemEval-3. We welcome tasks that can test an automatic system's semantic analysis of text, whether application-dependent or application-independent. We especially welcome tasks for different languages and cross-lingual tasks.
For SemEval-3 we particularly encourage the following aspects in task design:
Reuse of existing annotations and training data