semantic similarity

SemEval-2017 Task 2: Monolingual and Cross-lingual Word Similarity

Abbreviated Title: 
SemEval2017-Task2-MultiWordSim
Call for Participation
Submission Deadline: 
30 Jan 2017
Location: 
TBD
City: 
TBD
Contact: 
Jose Camacho-Collados
Mohammad Taher Pilehvar
Contact Email: 
collados [at] di.uniroma1.it
mp792 [at] cam.ac.uk

ACL-Sponsored Event: 
Yes

SemEval 2017 Task 1: Semantic Textual Similarity (STS)

Abbreviated Title: 
STS-2017
Call for Participation
Submission Deadline: 
30 Jan 2017
Contact: 
Eneko Agirre
Daniel Cer
Mona Diab
Lucia Specia
Contact Email: 
sts-organizers [at] googlegroups.com

Semantic Textual Similarity (STS) measures the degree of equivalence in the underlying semantics of paired snippets of text. While making such an assessment is trivial for humans, constructing algorithms and computational models that mimic human-level performance represents a difficult and deep natural language understanding problem. The 2017 STS shared task involves multilingual and cross-lingual evaluation of Arabic, Spanish, and English data, as well as a surprise-language track to explore methods for cross-lingual transfer.
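An STS system assigns each sentence pair a graded similarity score. As a toy illustration only (not a method endorsed by the task organizers), a surface-overlap baseline can be sketched as cosine similarity between bag-of-words term-frequency vectors:

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words term-frequency vectors
    of two sentences: 1.0 for identical token bags, 0.0 for disjoint ones."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

print(cosine_similarity("a dog is running", "a dog is running"))        # 1.0
print(cosine_similarity("a dog is running", "stock prices fell today")) # 0.0
```

Such lexical baselines fail exactly where STS is hard: paraphrases with no word overlap score 0, which is why participating systems typically rely on deeper semantic representations.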

ACL-Sponsored Event: 
Yes

SemEval 2016: Semantic Textual Similarity [Call for Shared Task Participation]

Abbreviated Title: 
STS 2016
Call for Participation
Submission Deadline: 
28 Feb 2016
Event Dates: 
10 Jan 2016 to 12 Aug 2016
Location: 
SemEval Workshop
City: 
Berlin
Country: 
Germany
Contact: 
Eneko Agirre
Carmen Banea
Daniel Cer
Mona Diab
Aitor Gonzalez-Agirre
Weiwei Guo
Rada Mihalcea
Janyce Wiebe
Contact Email: 
sts-semeval [at] googlegroups.com
e.agirre [at] ehu.eus
carmen.banea [at] gmail.com
danielcer [at] acm.org
aitor.gonzalezagirre [at] gmail.com
weiwei [at] cs.columbia.edu
mihalcea [at] umich.edu
wiebe [at] cs.pitt.edu

Call for Shared Task Participation
SemEval 2016 Task 1: Semantic Textual Similarity (STS)

Semantic Textual Similarity (STS) measures the degree of equivalence in the underlying semantics of paired snippets of text. While making such an assessment is trivial for humans, constructing algorithms and computational models that mimic human-level performance represents a difficult and deep natural language understanding (NLU) problem.

ACL-Sponsored Event: 
No

2014 BioASQ Challenges on Biomedical Semantic Indexing and Question Answering

Abbreviated Title: 
2014 BioASQ Challenges
Call for Participation
Contact: 
Ion Androutsopoulos
Contact Email: 
ion [at] aueb.gr

BioASQ challenge on large-scale biomedical semantic indexing and question answering
(part of the CLEF 2014 QA track to take place in Sheffield, UK, 15-18 September, 2014)

Web site: http://bioasq.org/
twitter: https://twitter.com/bioasq
CLEF-QA site: http://nlp.uned.es/clef-qa/

The BioASQ challenge consists of two tasks (Task 2a and Task 2b).

If you are interested in any of the following areas, you are encouraged to participate:

* Large-scale and hierarchical classification
* Machine learning
* Semantic indexing and semantic similarity

ACL-Sponsored Event: 
No