SemEval Portal

Revision as of 05:03, 19 November 2010

This page serves as a community portal for everything related to Semantic Evaluation (SemEval).

Semantic Evaluation Exercises

SemEval (Semantic Evaluation) is an ongoing series of evaluations of computational semantic analysis systems; it evolved from the Senseval Word sense evaluation series. The evaluations are intended to explore the nature of meaning in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.

This series of evaluations provides a mechanism to characterize in more precise terms exactly what needs to be computed when dealing with meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify word senses computationally. They have since grown to investigate the interrelationships among the elements in a sentence (e.g., semantic role labeling), relations between sentences (e.g., coreference), and the nature of what we are saying (semantic relations and sentiment analysis).

The purpose of the SemEval exercises and SENSEVAL is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.

Upcoming and Past Events

Event       Year  Location                Notes
SemEval 3   to be determined              discussion at SemEval 3 Group
SemEval 2   2010  Uppsala, Sweden         proceedings
SemEval 1   2007  Prague, Czech Republic  proceedings; copy of website at Internet Archive
SENSEVAL 3  2004  Barcelona, Spain        proceedings
SENSEVAL 2  2001  Toulouse, France        main link provides results, data, system descriptions, task descriptions, and workshop program; copy of website at Internet Archive
SENSEVAL 1  1998  East Sussex, UK         papers in Computers and the Humanities, subscription or pay per view


Tasks in Semantic Evaluation

The major tasks in semantic evaluation include:

  • Word sense disambiguation (WSD; lexical sample and all-words): the process of identifying which sense of a word (i.e., which meaning) is used in a sentence when the word has multiple meanings (polysemy). The WSD task has two variants: the "lexical sample" task and the "all-words" task. The former involves disambiguating occurrences of a small, pre-selected sample of target words, while in the latter every word in a piece of running text must be disambiguated.
  • Multi-lingual or cross-lingual word-sense disambiguation: word senses are defined according to translation distinctions; e.g., a polysemous word in Japanese is translated differently depending on context. The task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are used to assess the quality of the disambiguation.
  • Subcategorization acquisition: semantically similar verbs tend to share subcategorization frames. The task is to disambiguate verb senses with any available method, so that the results can then be fed into automatic methods for acquiring subcategorization frames, under the hypothesis that the disambiguation will cluster the instances.
  • Semantic role labeling
  • Word-sense induction
  • Semantic relation identification
  • Metonymy resolution
  • Temporal information processing
  • Lexical substitution
  • Evaluation of lexical resources
  • Coreference resolution
  • Sentiment analysis

This list is expected to grow as the field progresses.
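The word-sense disambiguation task above can be illustrated with a minimal sketch of the simplified Lesk algorithm, a classic gloss-overlap baseline: choose the sense whose dictionary gloss shares the most words with the sentence containing the target word. The tiny sense inventory and sense identifiers below are hypothetical, for illustration only; SemEval tasks use real inventories such as WordNet.

```python
# Simplified Lesk baseline for lexical-sample WSD: score each candidate
# sense by the word overlap between its gloss and the sentence context.
# TOY_SENSES is a hypothetical two-sense inventory for "bank".
TOY_SENSES = {
    "bank": {
        "bank.n.01": "a financial institution that accepts deposits and lends money",
        "bank.n.02": "sloping land beside a body of water such as a river",
    }
}

def simplified_lesk(word, sentence, senses=TOY_SENSES):
    """Return the sense id whose gloss best overlaps the sentence context."""
    context = set(sentence.lower().split())
    best_sense, best_overlap = None, -1
    for sense_id, gloss in senses[word].items():
        overlap = len(context & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense_id, overlap
    return best_sense

print(simplified_lesk("bank", "she sat on the bank of the river and watched the water"))
# → bank.n.02 (the "river" gloss shares "of", "water", "river" with the context)
```

Real systems refine this with stemming, stopword removal, and richer context, but the gloss-overlap idea is the same.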

Organization

SIGLEX, the ACL Special Interest Group on the Lexicon, is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. The SENSEVAL site serves as the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who issue the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.

SemEval on Wikipedia

On Wikipedia, a SemEval page has been created; it calls for contributions and suggestions on how to improve the page and to further the understanding of computational semantics.

See also