October 26, 2016 | BY danielcer
Contact:
Eneko Agirre
Daniel Cer
Mona Diab
Lucia Specia
Semantic Textual Similarity (STS) measures the degree of equivalence in the underlying semantics of paired snippets of text. While making such an assessment is trivial for humans, constructing algorithms and computational models that mimic human-level performance represents a difficult and deep natural language understanding problem. The 2017 STS shared task involves multilingual and cross-lingual evaluation of Arabic, Spanish, and English data, as well as a surprise-language track to explore methods for cross-lingual transfer.
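As a rough illustration of what an STS system produces (not any official baseline or participant system), the sketch below scores a sentence pair on the task's 0-5 similarity scale, where 5 denotes semantic equivalence, using cosine similarity over bag-of-words vectors:

```python
# Minimal, hypothetical STS baseline: cosine similarity over
# bag-of-words vectors, rescaled to the task's 0-5 similarity range.
# Illustrates the input/output contract only.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def sts_score(sent1: str, sent2: str) -> float:
    """Score a sentence pair on the 0-5 STS scale (5 = equivalent)."""
    v1 = Counter(sent1.lower().split())
    v2 = Counter(sent2.lower().split())
    return 5.0 * cosine(v1, v2)

print(sts_score("A man is playing a guitar.",
                "A person plays a guitar."))  # high lexical overlap, ~3.3
```

Since submissions are evaluated by Pearson correlation between system scores and human judgments, any scoring function of this shape can serve as a simple point of comparison; participating systems are, of course, far more sophisticated.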
September 14, 2015 | BY danielcer
Event Dates:
10 Jan 2016 to 12 Aug 2016
Contact:
Eneko Agirre
Carmen Banea
Daniel Cer
Mona Diab
Aitor Gonzalez-Agirre
Weiwei Guo
Rada Mihalcea
Janyce Wiebe
Call for Shared Task Participation
SemEval 2016 Task 1: Semantic Textual Similarity (STS)
Semantic Textual Similarity (STS) measures the degree of equivalence in the underlying semantics of paired snippets of text. While making such an assessment is trivial for humans, constructing algorithms and computational models that mimic human-level performance represents a difficult and deep natural language understanding (NLU) problem.
November 14, 2013 | BY David Jurgens
Event Dates:
23 Aug 2014 to 24 Aug 2014
Contact:
David Jurgens (jurgens@di.uniroma1.it)
Taher Pilehvar (pilehvar@di.uniroma1.it)
Roberto Navigli (navigli@di.uniroma1.it)
SemEval 2014 Task 3: Cross-Level Semantic Similarity
http://alt.qcri.org/semeval2014/task3/
The aim of this task is to evaluate semantic similarity when comparing textual items of different granularities: paragraphs, sentences, phrases, words, and senses.
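To make the cross-level setting concrete, the sketch below (a hypothetical illustration, not the task's scoring method) compares a word against a sentence by reducing both to token sets and taking their Jaccard overlap; real systems use much richer representations, such as sense inventories or distributional vectors.

```python
# Hypothetical illustration of cross-level comparison: reduce items of
# different granularity (here, a word and a sentence) to token sets and
# measure Jaccard overlap. Actual systems use far richer models.
import re

def tokens(text: str) -> set:
    """Lowercased word tokens from a naive regex tokenizer."""
    return set(re.findall(r"\w+", text.lower()))

def jaccard(a: set, b: set) -> float:
    """Jaccard overlap between two token sets (0.0 if both are empty)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

word = "climate"                                          # word level
sent = "Global climate change affects weather patterns."  # sentence level
print(jaccard(tokens(word), tokens(sent)))  # 1/6, roughly 0.17
```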
October 27, 2010 | BY Suresh Manandhar
Contact:
Suresh Manandhar
Deniz Yuret
SemEval-3
6th International Workshop on Semantic Evaluation
2nd Call for Task Proposals - Extended Deadline
The SemEval programme committee invites proposals for tasks to be run as part of SemEval-3. We welcome tasks that can test an automatic system for semantic analysis of text, whether application-dependent or application-independent. We especially welcome tasks for different languages and cross-lingual tasks.
For SemEval-3 we particularly encourage the following aspects in task design:
Reuse of existing annotations and training data