SemEval Portal

From ACL Wiki
Revision as of 20:12, 18 November 2010

This page serves as a community portal for everything related to Semantic Evaluation (SemEval).

Semantic Evaluation Exercises

SemEval (Semantic Evaluation) is an ongoing series of evaluations of computational semantic analysis systems; it evolved from the Senseval Word sense evaluation series. The evaluations are intended to explore the nature of meaning in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.

This series of evaluations provides a mechanism for characterizing, in more precise terms, exactly what is needed to compute meaning. As such, the evaluations offer an emergent mechanism for identifying the problems of, and solutions to, computing with meaning. The exercises have evolved to articulate more of the dimensions involved in our use of language. They began with apparently simple attempts to identify word senses computationally, and have since moved on to investigate the interrelationships among the elements in a sentence (e.g., semantic role labeling), relations between sentences (e.g., coreference), and the nature of what we are saying (semantic relations and sentiment analysis).

The purpose of the SemEval exercises and SENSEVAL is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the fourth workshop, SemEval-2007 (SemEval-1), the tasks broadened to include semantic analysis beyond word sense disambiguation. This portal provides a comprehensive view of the issues involved in semantic evaluations.

Upcoming and Past Events

Event      | Year             | Location               | Notes
SemEval 3  | to be determined | to be determined       | discussion at SemEval 3 Group
SemEval 2  | 2010             | Uppsala, Sweden        | proceedings
SemEval 1  | 2007             | Prague, Czech Republic | proceedings; copy of website at Internet Archive
SENSEVAL 3 | 2004             | Barcelona, Spain       | proceedings
SENSEVAL 2 | 2001             | Toulouse, France       | main link provides links to results, data, system descriptions, task descriptions, and workshop program; copy of website at Internet Archive
SENSEVAL 1 | 1998             | East Sussex, UK        | papers in Computers and the Humanities, subscribers or pay per view


Tasks in Semantic Evaluation

The major tasks in semantic evaluation include:

  • Word-sense disambiguation (lexical sample and all-words): the process of identifying which sense of a word (i.e., meaning) is used in a sentence when the word has multiple meanings (polysemy). The WSD task has two variants: the "lexical sample" task and the "all-words" task. The former involves disambiguating the occurrences of a small sample of target words selected in advance, while in the latter all the words in a piece of running text must be disambiguated.
  • Multi-lingual or cross-lingual word-sense disambiguation
  • Subcategorization acquisition
  • Semantic role labeling
  • Word-sense induction
  • Semantic relation identification
  • Metonymy resolution
  • Temporal information processing
  • Lexical substitution
  • Evaluation of lexical resources
  • Coreference resolution
  • Sentiment analysis
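
The lexical-sample WSD setting above can be illustrated with a minimal gloss-overlap (simplified Lesk) disambiguator. This is a sketch only: the tiny sense inventory and sense keys below are invented for illustration and are not drawn from any SemEval or SENSEVAL dataset, where systems are instead scored against gold-standard sense annotations.

```python
# Simplified Lesk word-sense disambiguation: choose the sense whose
# dictionary gloss shares the most words with the target word's context.
# The sense inventory here is a hand-written toy example.

SENSES = {
    "bank": {
        "bank%finance": "a financial institution that accepts deposits and lends money",
        "bank%river": "sloping land beside a body of water such as a river",
    }
}

def disambiguate(word: str, context: str) -> str:
    """Return the sense key whose gloss overlaps most with the context words."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("bank", "she deposited her money at the bank"))
# prints "bank%finance" ("money" overlaps with the finance gloss)
```

Real lexical-sample systems replace the toy inventory with a sense repository such as WordNet and use supervised classifiers trained on the annotated occurrences, but the evaluation setup is the same: predict one sense key per marked occurrence of a target word.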

This list is expected to grow as the field progresses.

Organization

SIGLEX, the ACL Special Interest Group on the Lexicon, is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. SENSEVAL is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who issue the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.

SemEval on Wikipedia

On Wikipedia, a SemEval page has been created; contributions and suggestions are welcome on how to improve that page and further the understanding of computational semantics.

See also