SemEval Portal (ACL Wiki; revision of 2011-10-10 by Kenski)

This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval''').

Quick links:

* [[SemEval 3]]
* [http://www.clres.com/siglex.html SIGLEX]

==Semantic Evaluation Exercises==

'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantics|computational semantic analysis]] systems; it evolved from the Senseval [[Word sense disambiguation|word sense]] evaluation series. The evaluations are intended to explore the nature of [[Semantics|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.

This series of evaluations provides a mechanism for characterizing in more precise terms exactly what is necessary to compute meaning. As such, the evaluations offer an emergent mechanism for identifying the problems and solutions of computing with meaning. These exercises have evolved to articulate more of the dimensions involved in our use of language. They began with apparently simple attempts to identify [[Word sense disambiguation|word senses]] computationally, and have since grown to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).

The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the fourth workshop, SemEval-2007 (SemEval-1), the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal provides a comprehensive view of the issues involved in semantic evaluations.

==Upcoming and Past Events==

{| border="1" cellpadding="7" cellspacing="0"
|-
! Event
! Year
! Location
! Notes
|-
| [http://www.cs.york.ac.uk/semeval/ SemEval 3]
| align="center" | 2013
| to be determined
| discussion at the [http://groups.google.com/group/semeval3 SemEval 3 Group]
|-
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]
| align="center" | 2010
| Uppsala, Sweden
| [http://aclweb.org/anthology-new/S/S10/ proceedings]
|-
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]
| align="center" | 2007
| Prague, Czech Republic
| [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> copy of website at the [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]
|-
| [http://www.senseval.org/senseval3 SENSEVAL 3]
| align="center" | 2004
| Barcelona, Spain
| [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]
|-
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]
| align="center" | 2001
| Toulouse, France
| main link provides results, data, system descriptions, task descriptions, and the workshop program <br /> copy of website at the [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]
|-
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]
| align="center" | 1998
| East Sussex, UK
| papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities] (subscription or pay-per-view)
|-
|}

==Overview of Issues in Semantic Analysis==

The SemEval exercises provide a mechanism for examining issues in the semantic analysis of texts. The topics of interest are the kinds of issues relevant to human understanding of language; they are generally different from the concerns of the logic-based approach of formal computational semantics. The primary goal is to replicate human processing by means of computer systems. The tasks (shown below) are developed by individuals and groups to deal with identifiable issues as they take on some concrete form.

The first major area in semantic analysis is the identification of the intended meaning at the word level (taken to include idiomatic expressions). This is word-sense disambiguation, a concept that is evolving away from the notion that words have discrete senses toward the view that they are characterized by the ways in which they are used, i.e., their contexts. The tasks in this area include lexical sample and all-words disambiguation, multi- and cross-lingual disambiguation, and lexical substitution. Given the difficulties of identifying word senses, other tasks relevant to this topic include word-sense induction, subcategorization acquisition, and the evaluation of lexical resources. The tasks in this area may be characterized as dealing with dictionary issues.

The second major area in semantic analysis is the understanding of how different sentence and textual elements fit together. Tasks in this area include semantic role labeling, semantic relation analysis, and coreference resolution. Other tasks look at more specialized issues, such as temporal information processing, metonymy resolution, and sentiment analysis. These tasks have many potential applications, such as information extraction, question answering, document summarization, machine translation, construction of thesauri and semantic networks, language modeling, paraphrasing, and recognizing textual entailment. In each of these potential applications, the contribution of the various types of semantic analysis constitutes the most outstanding research issue.

==Tasks in Semantic Evaluation==

The major tasks in semantic evaluation include:
* '''[[Word sense disambiguation]]''' (WSD): the process of identifying which [[Word sense disambiguation|sense]] of a word (i.e., which [[Semantics|meaning]]) is used in a [[Sentence (linguistics)|sentence]] when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: the "[[lexical sample task|lexical sample]]" and the "[[all-words task|all words]]" task. The former comprises disambiguating the occurrences of a small sample of previously selected target words, while in the latter all the words in a piece of running text must be disambiguated. Tasks have been run for many languages and have covered the disambiguation of nouns, verbs, adjectives, and prepositions. A new task evaluates phrasal semantics (compositionality and the semantic similarity of phrases).
* '''Multi-lingual or cross-lingual word-sense disambiguation''': word senses are defined according to translation distinctions, e.g., a polysemous word in Japanese is translated differently depending on the context. The WSD task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are used to assess the quality of the disambiguation. New tasks include cross-lingual content-based recommendation (where user profiles are built to recommend items of interest in another language), examining semantic textual similarity with a view toward evaluating modular semantic components, and linking noun phrases across Wikipedia articles in different languages.
* '''Word-sense induction''': comparison of sense-induction and discrimination systems. The task is to cluster corpus instances (word uses, rather than word senses) and to evaluate systems on how well the clusters correspond to pre-existing sense inventories or to various sense-mapping systems. New tasks provide an evaluation framework for web search result clustering, induction for graded and non-graded senses, and tags used in folksonomies.
* '''Lexical substitution or simplification''': finding a substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context, and it allows the use of any kind of lexical resource or technique, including word sense disambiguation and word sense induction. A cross-lingual task was also defined. This topic also includes textual entailment and paraphrasing tasks.
* '''Evaluation of lexical resources''': the submitted lexical resources are evaluated indirectly, by running a simple WSD system based on topic signatures (sets of words related to each target sense). A lexical sample tagged with English WordNet senses was used for evaluation.
* '''Subcategorization acquisition''': the hypothesis is that semantically similar verbs are similar in terms of their subcategorization frames. The task is to use any available method for disambiguating verb senses, so that the results can be fed into automatic methods for acquiring subcategorization frames, with the expectation that the disambiguation will cluster the instances.
* '''Semantic role labeling''': identifying and labeling constituents of sentences with their semantic roles. The basic task began with attempts to replicate FrameNet data, specifically frame elements. It has expanded to inferring and developing new frames and frame elements, in individual sentences and in full running texts, with identification of intersentential links and coreference chains. New tasks focus on the extraction of spatial information from natural language (spatial role labeling) and the utility of semantic dependency parsing in semantic role labeling.
* '''[[Semantic relation identification]]''': examining relations between lexical items in a sentence. The task, given a sample of semantic relation types, is to identify and classify semantic relations between nominals (i.e., nouns and base noun phrases, excluding named entities); a main purpose of this task is to assess different classification methods. Another task is, given a sentence and two tagged nominals, to predict the relation between those nominals and its direction. New tasks seek to measure the relational similarity between pairs of words, to extract drug-drug interactions from biomedical texts, and to develop methods in causal reasoning.
* '''Metonymy resolution''': metonymy is the figurative substitution of an attribute of a name for the thing specified. The task is a lexical sample task: (1) classify preselected expressions of a particular semantic class (such as country names) as having a literal or a metonymic reading, and, if metonymic, (2) assign the reading to a prespecified metonymic pattern (such as place-for-event or company-for-stock) or, alternatively, recognize it as an innovative reading. A second task is to identify when the arguments of a specified predicate do not satisfy its selectional restrictions, and in that case to identify both the type mismatch and the type shift (coercion).
* '''Temporal information processing''': the temporal location and ordering of events in newspaper articles, narratives, and similar texts. The task is to identify the events described in a text and locate them in time, i.e., identification of temporal referring expressions, events, and temporal relations within a text. A further task requires systems to recognize which of a fixed set of temporal relations holds between (a) events and time expressions within the same sentence, (b) events and the document creation time, (c) main events in consecutive sentences, and (d) two events where one syntactically dominates the other.
* '''Coreference resolution''': detection and resolution of coreference. The task is to detect full coreference chains, composed of named entities, pronouns, and full noun phrases, and to resolve pronouns, i.e., to find their antecedents.
* '''Sentiment analysis''': emotion annotation and polarity labeling. The task is to classify the titles of newspaper articles with the appropriate emotion label and/or with a valence indication (positive/negative), given a set of six predefined emotion labels (Anger, Disgust, Fear, Joy, Sadness, Surprise). A new task examines polarity in Twitter messages.
This list is expected to grow as the field progresses.
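To make the lexical-sample setting concrete, the sketch below implements a simplified Lesk-style disambiguator: it selects the sense whose dictionary gloss shares the most words with the target word's context. This is a toy illustration only, not an official SemEval baseline; the two-sense inventory for "bank" is invented for the example, whereas real tasks use inventories such as WordNet.

```python
# Toy lexical-sample WSD in the spirit of the simplified Lesk algorithm:
# choose the sense whose gloss has the largest word overlap with the
# context of the target word. The sense inventory below is hypothetical.

def lesk_disambiguate(context_words, sense_glosses):
    """Return the sense id whose gloss overlaps most with the context."""
    context = set(w.lower() for w in context_words)
    best_sense, best_overlap = None, -1
    for sense_id, gloss in sense_glosses.items():
        overlap = len(context & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense_id, overlap
    return best_sense

# Hypothetical two-sense inventory for the target word "bank".
senses = {
    "bank#1": "a financial institution that accepts deposits and lends money",
    "bank#2": "the sloping land beside a body of water such as a river",
}
context = "she sat on the grassy bank of the river watching the water".split()
print(lesk_disambiguate(context, senses))  # bank#2
```

Here the gloss of "bank#2" shares "the", "of", "river", and "water" with the context, so the riverbank sense wins; gloss overlap with function words is one of the known weaknesses that motivated richer disambiguation methods in the actual evaluations.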

Some tasks are closely related to each other. For instance, word sense disambiguation (monolingual, multi-lingual, and cross-lingual), word-sense induction, lexical substitution, subcategorization acquisition, and the evaluation of lexical resources all concern word senses.
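As a concrete illustration of how such exercises are commonly scored, the sketch below follows the precision/recall convention often used in word-sense evaluations: precision is computed over the items a system actually answered, and recall over all gold-annotated items, so abstaining on hard items lowers recall but not precision. The item ids and sense labels are invented, and this is not the official scorer of any specific SemEval task.

```python
# Sketch of a precision/recall scorer for sense-labeled items, where a
# system may abstain on items it cannot disambiguate. Ids are hypothetical.

def score(gold, predictions):
    """gold: dict item_id -> sense; predictions: dict item_id -> sense.
    Items a system skipped are simply absent from predictions."""
    answered = [i for i in predictions if i in gold]
    correct = sum(1 for i in answered if predictions[i] == gold[i])
    precision = correct / len(answered) if answered else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall

gold = {"d001": "bank#2", "d002": "bank#1", "d003": "bank#2", "d004": "bank#1"}
pred = {"d001": "bank#2", "d002": "bank#2", "d003": "bank#2"}  # d004 skipped
p, r = score(gold, pred)
print(round(p, 2), round(r, 2))  # 0.67 0.5
```

Under this convention a system that answers everything has equal precision and recall, which is why single-number "accuracy" is often reported for all-words tasks while lexical-sample results quote both figures.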

==Organization==

[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who issue the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.

==SemEval on Wikipedia==

On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page has been created; it calls for contributions and suggestions on how to improve the page and to further the understanding of computational semantics.

==See also==

* [[Semantics]]
* [[Computational Semantics]]
* [[Statistical Semantics]]
* [[Semantics software for English]]

[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=9018SemEval Portal2011-10-10T16:24:10Z<p>Kenski: </p>
<hr />
<div>This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
<br />
Quick links:<br />
<br />
* [[SemEval 3]]<br />
* [http://www.clres.com/siglex.html SIGLEX]<br />
<br />
==Semantic Evaluation Exercises==<br />
<br />
'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantics|computational semantic analysis]] systems; it evolved from the Senseval [[Word sense disambiguation|Word sense]] evaluation series. The evaluations are intended to explore the nature of [[Semantics|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.<br />
<br />
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify [[Word sense disambiguation|word senses]] computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [http://www.cs.york.ac.uk/semeval/ SemEval 3]<br />
| align="center" | 2013<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
==Overview of Issues in Semantic Analysis==<br />
<br />
The SemEval exercises provide a mechanism for examining issues in semantic analysis of texts. The topics of interest are concerned with identifying and characterizing the kinds of issues relevant to human understanding of language; the topics are generally different from the concerns of the logic-based approach of formal computational semantics. The primary goal is to replicate human processing by means of computer systems. The tasks (shown below) are developed by individuals and groups to deal with identifiable issues, as they take on some concrete form.<br />
<br />
The first major area in semantic analysis is the identification of the intended meaning at the word level (taken to include idiomatic expressions). This is word-sense disambiguation (a concept that is evolving away from the notion that words have discrete senses, but rather are characterized by the ways in which they are used, i.e., their contexts). The tasks in this area include lexical sample and all-word disambiguation, multi- and cross-lingual disambiguation, and lexical substitution. Given the difficulties of identifying word senses, other tasks relevant to this topic include word-sense induction, subcategorization acquisition, and evaluation of lexical resources. The tasks in this area may be characterized as dealing with dictionary issues.<br />
<br />
The second major area in semantic analysis is the understanding of how different sentence and textual elements fit together. Tasks in this area include semantic role labeling, semantic relation analysis, and coreference resolution. Other tasks in this area look at more specialized issues of semantic analysis, such as temporal information processing, metonymy resolution, and sentiment analysis. The tasks in this area have many potential applications, such as information extraction, question answering, document summarization, machine translation, construction of thesauri and semantic networks, language modeling, paraphrasing, and recognizing textual entailment. In each of these potential applications, the contribution of the types of semantic analysis constitutes the most outstanding research issue.<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation include:<br />
* '''[[Word sense disambiguation]]''': WSD, lexical sample and all-words, the process of identifying which [[Word sense disambiguation|sense]] of a word (i.e. [[Semantics|meaning]]) is used in a [[Sentence (linguistics)|sentence]], when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: "[[lexical sample task|lexical sample]]" and "[[all-words task|all words]]" task. The former comprises disambiguating the occurrences of a small sample of target words which were previously selected, while in the latter all the words in a piece of running text need to be disambiguated. Tasks have been performed for many languages. Tasks have covered disambiguation of nouns, verbs, adjectives, and prepositions.<br />
* '''Multi-lingual or cross-lingual word-sense disambiguation''': word senses are defined according to translation distinctions, e.g., a polysemous word in Japanese is translated differently in a given context. The WSD task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are use to assess the quality of the disambiguation.<br />
* '''Word-sense induction''': comparison of sense-induction and discrimination systems. The task is to cluster corpus instances (word uses, rather than word senses) and to evaluate systems on how well they correspond to pre-existing sense inventories or to various sense mapping systems.<br />
* '''Lexical substitution''': find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. It allows the use of any kind of lexical resource or technique, including word sense disambiguation and word sense induction. A cross-lingual task was also defined.<br />
* '''Evaluation of lexical resources''': the task evaluates the submitted lexical resources indirectly, running a simple WSD based on topic signatures (sets of words related to each target sense). A lexical sample tagged with English WordNet senses was used for evaluation. <br />
* '''Subcategorization acquistion''': semantically similar verbs are similar in terms of subcategorization frames. The task is to use any available method for disambiguating verb senses, so that the results can then be fed into automatic methods used for acquiring subcategorization frames, with the hypothesis that the disambiguation will cluster the instances.<br />
* '''Semantic role labeling''': identifying and labeling constituents of sentences with their semantic roles. The basic task began with attempts to replicate FrameNet data, specifically frame elements. This task has expanded to inferring and developing new frames and frame elements, in individual sentences and in full running texts, with identification of intersentential links and coreference chains.<br />
* '''[[Semantic relation identification]]''': examining relations between lexical items in a sentence. The task, given a sample of semantic relation types, is to identify and classify semantic relations between nominals (i.e., nouns and base noun phrases, excluding named entities); a main purpose of this task is to assess different classification methods. Another task is, given a sentence and two tagged nominals, to predict the relation between those nominals and the direction of the relation.<br />
* '''Metonymy resolution''': the figurative substitution of an attribute of a name for the thing specified. The task is a lexical sample task (1) to classify preselected expressions of a particular semantic class (such as country names) as having a literal or a metonymic reading, and if so, (2) to identify a further specification into prespecified metonymic patterns (such as place-for-event or company-for-stock) or, alternatively, recognition as an innovative reading. A second task is to identify when the arguments of a specified predicate does not satisfy selectional restrictions, and if not, to identify both the type mismatch and the type shift (coercion).<br />
* '''Temporal information processing''': the temporal location and order of events in newspaper articles, narratives, and similar texts. The task is to identify the events described in a text and locate these in time, i.e., identification of temporal referring expressions, events and temporal relations within a text. A further task requires systems to recognize which of a fixed set of temporal relations holds between (a) events and time expressions within the same sentence (b) events and the document creation time (c) main events in consecutive sentences, and (d) two events where one syntactically dominates the other.<br />
* '''Coreference resolution''': detection and resolution of coreferences. The task is to detect full coreference chains, composed by named entities, pronouns, and full noun phrases and to resolve pronouns, i.e., finding their antecedents.<br />
* '''Sentiment analysis''': emotion annotation, polarity orientation labeling. The task is to classify the titles of newspaper articles with the appropriate emotion label and/or with a valence indication (positive/negative), given a set of predefined six emotion labels (i.e., Anger, Disgust, Fear, Joy, Sadness, Surprise).<br />
This list is expected to grow as the field progresses.<br />
<br />
Some tasks are closely related to each other. For instance, word sense disambiguation (monolingual, multi-lingual and cross-lingual), word sense induction task, lexical substitution, subcategorization acquisition and evaluation of lexical resources are all related to word senses.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
==SemEval on Wikipedia==<br />
<br />
On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page had been created and it is calling for contributions and suggestions on how to improve the Wikipedia page and to further the understanding of computational semantics.<br />
<br />
==See also==<br />
<br />
* [[Semantics]]<br />
* [[Computational Semantics]]<br />
* [[Statistical Semantics]]<br />
* [[Semantics software for English]]<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8519SemEval Portal2010-12-03T17:46:04Z<p>Kenski: /* Overview of Issues in Semantic Analysis */</p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantics|computational semantic analysis]] systems; it evolved from the Senseval [[Word sense disambiguation|Word sense]] evaluation series. The evaluations are intended to explore the nature of [[Semantics|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.<br />
<br />
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify [[Word sense disambiguation|word senses]] computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Overview of Issues in Semantic Analysis==<br />
<br />
The SemEval exercises provide a mechanism for examining issues in semantic analysis of texts. The topics of interest fall short of the logical rigor that is found in formal computational semsntics, attempting to identify and characterize the kinds of issues relevant to human understanding of language. The primary goal is to replicate human processing by means of computer systems. The tasks (shown below) are developed by individuals and groups to deal with identifiable issues, as they take on some concrete form.<br />
<br />
The first major area in semantic analysis is the identification of the intended meaning at the word level (taken to include idiomatic expressions). This is word-sense disambiguation (a concept that is evolving away from the notion that words have discrete senses, but rather are characterized by the ways in which they are used, i.e., their contexts). The tasks in this area include lexical sample and all-word disambiguation, multi- and cross-lingual disambiguation, and lexical substitution. Given the difficulties of identifying word senses, other tasks relevant to this topic include word-sense induction, subcategorization acquisition, and evaluation of lexical resources. The tasks in this area may be characterized as dealing with dictionary issues.<br />
<br />
The second major area in semantic analysis is the understanding of how different sentence and textual elements fit together. Tasks in this area include semantic role labeling, semantic relation analysis, and coreference resolution. Other tasks in this area look at more specialized issues of semantic analysis, such as temporal information processing, metonymy resolution, and sentiment analysis. The tasks in this area have many potential applications, such as information extraction, question answering, document summarization, machine translation, construction of thesauri and semantic networks, language modeling, paraphrasing,<br />
and recognizing textual entailment. In each of these potential applications, determining the contribution that each type of semantic analysis can make remains the outstanding research issue.<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation include:<br />
* '''[[Word sense disambiguation]]''': WSD, lexical sample and all-words, the process of identifying which [[Word sense disambiguation|sense]] of a word (i.e., [[Semantics|meaning]]) is used in a [[Sentence (linguistics)|sentence]] when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: the "[[lexical sample task|lexical sample]]" and the "[[all-words task|all-words]]" task. The former comprises disambiguating occurrences of a small, previously selected sample of target words, while in the latter all the words in a piece of running text must be disambiguated. Tasks have been performed for many languages and have covered disambiguation of nouns, verbs, adjectives, and prepositions.<br />
* '''Multi-lingual or cross-lingual word-sense disambiguation''': word senses are defined according to translation distinctions, e.g., a polysemous word in Japanese is translated differently depending on the context. The WSD task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are used to assess the quality of the disambiguation.<br />
* '''Word-sense induction''': comparison of sense-induction and discrimination systems. The task is to cluster corpus instances (word uses, rather than word senses) and to evaluate systems on how well they correspond to pre-existing sense inventories or to various sense mapping systems.<br />
* '''Lexical substitution''': find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. It allows the use of any kind of lexical resource or technique, including word sense disambiguation and word sense induction. A cross-lingual task was also defined.<br />
* '''Evaluation of lexical resources''': the task evaluates the submitted lexical resources indirectly, running a simple WSD based on topic signatures (sets of words related to each target sense). A lexical sample tagged with English WordNet senses was used for evaluation. <br />
* '''Subcategorization acquisition''': semantically similar verbs are similar in terms of subcategorization frames. The task is to use any available method for disambiguating verb senses, so that the results can then be fed into automatic methods for acquiring subcategorization frames, with the hypothesis that the disambiguation will cluster the instances.<br />
* '''Semantic role labeling''': identifying and labeling constituents of sentences with their semantic roles. The basic task began with attempts to replicate FrameNet data, specifically frame elements. This task has expanded to inferring and developing new frames and frame elements, in individual sentences and in full running texts, with identification of intersentential links and coreference chains.<br />
* '''Semantic relation identification''': examining relations between lexical items in a sentence. One task, given a sample of semantic relation types, is to identify and classify semantic relations between nominals (i.e., nouns and base noun phrases, excluding named entities); a main purpose of this task is to assess different classification methods. Another task is, given a sentence and two tagged nominals, to predict the relation between those nominals and the direction of the relation.<br />
* '''Metonymy resolution''': the figurative substitution of an attribute of a name for the thing specified. The first task is a lexical sample task (1) to classify preselected expressions of a particular semantic class (such as country names) as having a literal or a metonymic reading and, if metonymic, (2) to identify a further specification into prespecified metonymic patterns (such as place-for-event or company-for-stock) or, alternatively, to recognize an innovative reading. A second task is to identify when the arguments of a specified predicate do not satisfy selectional restrictions and, if they do not, to identify both the type mismatch and the type shift (coercion).<br />
* '''Temporal information processing''': the temporal location and order of events in newspaper articles, narratives, and similar texts. The task is to identify the events described in a text and locate them in time, i.e., identification of temporal referring expressions, events, and temporal relations within a text. A further task requires systems to recognize which of a fixed set of temporal relations holds between (a) events and time expressions within the same sentence, (b) events and the document creation time, (c) main events in consecutive sentences, and (d) two events where one syntactically dominates the other.<br />
* '''Coreference resolution''': detection and resolution of coreferences. The task is to detect full coreference chains, composed of named entities, pronouns, and full noun phrases, and to resolve pronouns, i.e., to find their antecedents.<br />
* '''Sentiment analysis''': emotion annotation, polarity orientation labeling. The task is to classify the titles of newspaper articles with the appropriate emotion label and/or with a valence indication (positive/negative), given a predefined set of six emotion labels (i.e., Anger, Disgust, Fear, Joy, Sadness, Surprise).<br />
This list is expected to grow as the field progresses.<br />
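Many of these tasks reduce to comparing a system's labels on corpus instances against a gold standard. The word-sense induction task adds a twist: induced clusters carry arbitrary labels, so scoring against a pre-existing sense inventory first requires mapping clusters to senses. The following sketch illustrates one simple such mapping (many-to-one: each cluster is mapped to its most frequent gold sense); the instance labels are invented for illustration and are not drawn from any SemEval dataset.<br />

```python
from collections import Counter, defaultdict

def many_to_one_accuracy(induced, gold):
    """Map each induced cluster to its most frequent gold sense,
    then score the fraction of instances labeled correctly.
    `induced` and `gold` are parallel lists of labels, one per instance."""
    by_cluster = defaultdict(Counter)
    for c, g in zip(induced, gold):
        by_cluster[c][g] += 1
    # Each cluster is mapped to the gold sense it overlaps most.
    mapping = {c: counts.most_common(1)[0][0] for c, counts in by_cluster.items()}
    correct = sum(mapping[c] == g for c, g in zip(induced, gold))
    return correct / len(gold)

# Hypothetical instances of the noun "bank":
induced = ["c1", "c1", "c2", "c2", "c1"]
gold = ["bank/river", "bank/river", "bank/money", "bank/money", "bank/money"]
print(many_to_one_accuracy(induced, gold))  # c1 -> bank/river, c2 -> bank/money; prints 0.8
```

Under such a mapping a system is rewarded for producing clusters that align with the gold senses, though mapping-based scores can be inflated by producing many small clusters, which is one reason evaluation of induction systems remains an open question.<br />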
<br />
Some tasks are closely related to each other. For instance, word sense disambiguation (monolingual, multi-lingual and cross-lingual), word sense induction task, lexical substitution, subcategorization acquisition and evaluation of lexical resources are all related to word senses.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
==SemEval on Wikipedia==<br />
<br />
On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page has been created; it calls for contributions and suggestions on how to improve the Wikipedia page and to further the understanding of computational semantics.<br />
<br />
==See also==<br />
<br />
* [[Semantics]]<br />
* [[Computational Semantics]]<br />
* [[Statistical Semantics]]<br />
* [[Semantics software for English]]<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=Word_sense_disambiguation&diff=8476Word sense disambiguation2010-11-30T20:32:30Z<p>Kenski: </p>
<hr />
<div>'''Word Sense Disambiguation''' (WSD) is the ability of software to distinguish which sense of a word is being used in a textual context. For a basic introduction to WSD, see [http://en.wikipedia.org/wiki/Word_sense_disambiguation Wikipedia's introduction to WSD]. For an up-to-date survey of the state of the art of the field, see [http://www.dsi.uniroma1.it/~navigli/pubs/ACM_Survey_2009_Navigli.pdf Navigli (2009)].<br />
<br />
In modern WSD systems, the senses of a word are typically taken from a specified dictionary; in earlier systems the senses were more typically generic senses selected by the originators of the system. These days, [[WordNet]] is the usual dictionary in question. WSD has been investigated in computational linguistics as a specific task for well over 40 years, though the acronym is newer. The SENSEVAL conferences have attempted to put Word Sense Disambiguation on an empirically measurable basis by hosting evaluations in which a corpus tagged with [[WordNet]] senses is created and participants attempt to recognize those senses after tuning their systems on a corpus of training data.<br />
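The train-then-test setup described above is commonly illustrated with the most-frequent-sense baseline: count each target word's sense frequencies in the sense-tagged training data, then always predict the majority sense at test time. The sketch below is a minimal illustration of that protocol, not a description of any particular SENSEVAL system; the tiny tagged corpus is invented.<br />

```python
from collections import Counter, defaultdict

def train_mfs(tagged_corpus):
    """tagged_corpus: iterable of (word, sense) pairs from training data.
    Returns a model mapping each word to its most frequent training sense."""
    counts = defaultdict(Counter)
    for word, sense in tagged_corpus:
        counts[word][sense] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def accuracy(model, test_corpus):
    """Fraction of held-out (word, sense) instances the model labels correctly."""
    hits = sum(model.get(w) == s for w, s in test_corpus)
    return hits / len(test_corpus)

# Invented sense-tagged data for the target word "bass":
train = [("bass", "bass%fish"), ("bass", "bass%fish"), ("bass", "bass%music")]
test = [("bass", "bass%fish"), ("bass", "bass%music")]
model = train_mfs(train)
print(model["bass"], accuracy(model, test))  # prints bass%fish 0.5
```

A real participant system would replace the majority vote with a context-sensitive classifier, but the surrounding evaluation loop stays the same.<br />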
<br />
The field of Word Sense Disambiguation has several ongoing debates, including whether the senses offered in existing dictionaries are adequate to distinguish the subtle meanings used in textual contexts, and how to evaluate the overall performance of a WSD system. For example, does it make sense to describe an overall percentage accuracy for a WSD system, or does evaluation require a specific comparison of system performance on a word-by-word basis?<br />
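The evaluation question can be made concrete: a single overall accuracy figure can mask large differences between individual words. The sketch below computes both views from hypothetical system answers (all words, senses, and judgments are invented for illustration).<br />

```python
from collections import defaultdict

def overall_and_per_word(answers):
    """answers: list of (target_word, predicted_sense, gold_sense) triples.
    Returns overall accuracy plus a per-word accuracy breakdown."""
    per_word = defaultdict(lambda: [0, 0])  # word -> [correct, total]
    for word, pred, gold in answers:
        per_word[word][1] += 1
        per_word[word][0] += int(pred == gold)
    overall = (sum(c for c, _ in per_word.values())
               / sum(t for _, t in per_word.values()))
    return overall, {w: c / t for w, (c, t) in per_word.items()}

answers = [
    ("bank", "bank/money", "bank/money"),
    ("bank", "bank/money", "bank/money"),
    ("plant", "plant/factory", "plant/flora"),
    ("plant", "plant/factory", "plant/flora"),
]
overall, per_word = overall_and_per_word(answers)
print(overall, per_word)  # 0.5 overall masks 1.0 on "bank" and 0.0 on "plant"
```

Here the headline figure of 50% hides the fact that the system is perfect on one word and useless on the other, which is the heart of the word-by-word evaluation argument.<br />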
<br />
==History==<br />
Among the earliest efforts at word sense disambiguation was the work of Kelly and Stone (1975), who published a book explicitly listing their rules for disambiguating word senses.<br />
<br />
== See also ==<br />
* [[Word Sense Disambiguation (State of the art)]]<br />
<br />
== External links ==<br />
<br />
* [http://en.wikipedia.org/wiki/Word_sense_disambiguation Wikipedia's introduction to WSD]<br />
* [http://www.scholarpedia.org/article/Word_sense_disambiguation Word sense disambiguation in Scholarpedia]<br />
<br />
== References ==<br />
<br />
* Roberto Navigli. ''[http://www.dsi.uniroma1.it/~navigli/pubs/ACM_Survey_2009_Navigli.pdf Word Sense Disambiguation: A Survey]'', ACM Computing Surveys, 41(2), 2009, pp.&nbsp;1–69.<br />
* Kelly, Edward F., and Stone, Philip J. (1975), ''Computer Recognition of English Word Senses'', Amsterdam: North-Holland. ISBN 0-444-10831-9<br />
<br />
[[Category:Word sense disambiguation|*]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8471SemEval Portal2010-11-27T20:15:02Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantics|computational semantic analysis]] systems; it evolved from the Senseval [[Word sense disambiguation|Word sense]] evaluation series. The evaluations are intended to explore the nature of [[Semantics|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.<br />
<br />
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify [[Word sense disambiguation|word senses]] computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Overview of Issues in Semantic Analysis==<br />
<br />
The SemEval exercises provide a mechanism for examining issues in semantic analysis of texts. The topics of interest fall short of the logical rigor that is found in formal computational semsntics, attempting to identify and characterize the kinds of issues relevant to human understanding of language. The primary goal is to replicate human processing by means of computer systems. The tasks (shown below) are developed by individuals and groups to deal with identifiable issues, as they take on some concrete form.<br />
<br />
The first major area in semantic analysis is the identification of the intended meaning at the word level (taken to include idiomatic expressions). This is word-sense disambiguation (a concept that is evolving away from the notion that words have discrete senses, but rather are characterized by the ways in which they are used, i.e., their contexts). The tasks in this area include lexical sample and all-word disambiguation, multi- and cross-lingual disambiguation, and lexical substitution. Given the difficulties of identifying word senses, other tasks relevant to this topic include word-sense induction, subcategorization acquisition, and evaluation of lexical resources.<br />
<br />
The second major area in semantic analysis is the understanding of how different sentence and textual elements fit together. Tasks in this area include semantic role labeling, semantic relation analysis, and coreference resolution. Other tasks in this area look at more specialized issues of semantic analysis, such as temporal information processing, metonymy resolution, and sentiment analysis. The tasks in this area have many potential applications, such as information extraction, question answering, document summarization, machine translation, construction of thesauri and semantic networks, language modeling, paraphrasing,<br />
and recognizing textual entailment. In each of these potential applications, the contribution of the types of semantic analysis constitutes the most outstanding research issue.<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation include:<br />
* '''[[Word sense disambiguation]]''': WSD, lexical sample and all-words, the process of identifying which [[Word sense disambiguation|sense]] of a word (i.e. [[Semantics|meaning]]) is used in a [[Sentence (linguistics)|sentence]], when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: "[[lexical sample task|lexical sample]]" and "[[all-words task|all words]]" task. The former comprises disambiguating the occurrences of a small sample of target words which were previously selected, while in the latter all the words in a piece of running text need to be disambiguated. Tasks have been performed for many languages. Tasks have covered disambiguation of nouns, verbs, adjectives, and prepositions.<br />
* '''Multi-lingual or cross-lingual word-sense disambiguation''': word senses are defined according to translation distinctions, e.g., a polysemous word in Japanese is translated differently in a given context. The WSD task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are use to assess the quality of the disambiguation.<br />
* '''Word-sense induction''': comparison of sense-induction and discrimination systems. The task is to cluster corpus instances (word uses, rather than word senses) and to evaluate systems on how well they correspond to pre-existing sense inventories or to various sense mapping systems.<br />
* '''Lexical substitution''': find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. It allows the use of kind of lexical resource or technique, including word sense disambiguation and word sense induction. A cross-lingual task was also defined.<br />
* '''Evaluation of lexical resources''': the task evaluates the submitted lexical resources indirectly, running a simple WSD based on topic signatures (sets of words related to each target sense). A lexical sample tagged with English WordNet senses was used for evaluation. <br />
* '''Subcategorization acquistion''': semantically similar verbs are similar in terms of subcategorization frames. The task is to use any available method for disambiguating verb senses, so that the results can then be fed into automatic methods used for acquiring subcategorization frames, with the hypothesis that the disambiguation will cluster the instances.<br />
* '''Semantic role labeling''': identifying and labeling constituents of sentences with their semantic roles. The basic task began with attempts to replicate FrameNet data, specifically frame elements. This task has expanded to inferring and developing new frames and frame elements, in individual sentences and in full running texts, with identification of intersentential links and coreference chains.<br />
* '''Semantic relation identification''': examining relations between lexical items in a sentence. The task, given a sample of semantic relation types, is to identify and classify semantic relations between nominals(i.e., nouns and base noun phrases, excluding named entities); a main purpose of this task is to assess different classification methods. Another task is, given a sentence and two tagged nominals, to predict the relation between those nominals and the direction of the relation.<br />
* '''Metonymy resolution''': the figurative substitution of an attribute of a name for the thing specified. The task is a lexical sample task (1) to classify preselected expressions of a particular semantic class (such as country names) as having a literal or a metonymic reading, and if so, (2) to identify a further specification into prespecified metonymic patterns (such as place-for-event or company-for-stock) or, alternatively, recognition as an innovative reading. A second task is to identify when the arguments of a specified predicate does not satisfy selectional restrictions, and if not, to identify both the type mismatch and the type shift (coercion).<br />
* '''Temporal information processing''': the temporal location and order of events in newspaper articles, narratives, and similar texts. The task is to identify the events described in a text and locate these in time, i.e., identification of temporal referring expressions, events and temporal relations within a text. A further task requires systems to recognize which of a fixed set of temporal relations holds between (a) events and time expressions within the same sentence (b) events and the document creation time (c) main events in consecutive sentences, and (d) two events where one syntactically dominates the other.<br />
* '''Coreference resolution''': detection and resolution of coreferences. The task is to detect full coreference chains, composed by named entities, pronouns, and full noun phrases and to resolve pronouns, i.e., finding their antecedents.<br />
* '''Sentiment analysis''': emotion annotation, polarity orientation labeling. The task is to classify the titles of newspaper articles with the appropriate emotion label and/or with a valence indication (positive/negative), given a set of predefined six emotion labels (i.e., Anger, Disgust, Fear, Joy, Sadness, Surprise).<br />
This list is expected to grow as the field progresses.<br />
<br />
Some tasks are closely related to each other. For instance, word sense disambiguation (monolingual, multi-lingual and cross-lingual), word sense induction task, lexical substitution and evaluation of lexical resources are all related to word senses.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
==SemEval on Wikipedia==<br />
<br />
On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page had been created and it is calling for contributions and suggestions on how to improve the Wikipedia page and to further the understanding of computational semantics.<br />
<br />
==See also==<br />
<br />
* [[Semantics]]<br />
* [[Computational Semantics]]<br />
* [[Statistical Semantics]]<br />
* [[Semantics software for English]]<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8470SemEval Portal2010-11-27T19:36:31Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantics|computational semantic analysis]] systems; it evolved from the Senseval [[Word sense disambiguation|Word sense]] evaluation series. The evaluations are intended to explore the nature of [[Semantics|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.<br />
<br />
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify [[Word sense disambiguation|word senses]] computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation include:<br />
* '''[[Word sense disambiguation]]''': WSD, lexical sample and all-words, the process of identifying which [[Word sense disambiguation|sense]] of a word (i.e. [[Semantics|meaning]]) is used in a [[Sentence (linguistics)|sentence]], when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: "[[lexical sample task|lexical sample]]" and "[[all-words task|all words]]" task. The former comprises disambiguating the occurrences of a small sample of target words which were previously selected, while in the latter all the words in a piece of running text need to be disambiguated. Tasks have been performed for many languages. Tasks have covered disambiguation of nouns, verbs, adjectives, and prepositions.<br />
* '''Multi-lingual or cross-lingual word-sense disambiguation''': word senses are defined according to translation distinctions, e.g., a polysemous word in Japanese is translated differently in a given context. The WSD task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are use to assess the quality of the disambiguation.<br />
* '''Word-sense induction''': comparison of sense-induction and discrimination systems. The task is to cluster corpus instances (word uses, rather than word senses) and to evaluate systems on how well they correspond to pre-existing sense inventories or to various sense mapping systems.<br />
* '''Lexical substitution''': find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. It allows the use of kind of lexical resource or technique, including word sense disambiguation and word sense induction. A cross-lingual task was also defined.<br />
* '''Evaluation of lexical resources''': the task evaluates the submitted lexical resources indirectly, running a simple WSD based on topic signatures (sets of words related to each target sense). A lexical sample tagged with English WordNet senses was used for evaluation. <br />
* '''Subcategorization acquistion''': semantically similar verbs are similar in terms of subcategorization frames. The task is to use any available method for disambiguating verb senses, so that the results can then be fed into automatic methods used for acquiring subcategorization frames, with the hypothesis that the disambiguation will cluster the instances.<br />
* '''Semantic role labeling''': identifying and labeling constituents of sentences with their semantic roles. The basic task began with attempts to replicate FrameNet data, specifically frame elements. This task has expanded to inferring and developing new frames and frame elements, in individual sentences and in full running texts, with identification of intersentential links and coreference chains.<br />
* '''Semantic relation identification''': examining relations between lexical items in a sentence. The task, given a sample of semantic relation types, is to identify and classify semantic relations between nominals(i.e., nouns and base noun phrases, excluding named entities); a main purpose of this task is to assess different classification methods. Another task is, given a sentence and two tagged nominals, to predict the relation between those nominals and the direction of the relation.<br />
* '''Metonymy resolution''': the figurative substitution of an attribute of a name for the thing specified. The task is a lexical sample task (1) to classify preselected expressions of a particular semantic class (such as country names) as having a literal or a metonymic reading, and if so, (2) to identify a further specification into prespecified metonymic patterns (such as place-for-event or company-for-stock) or, alternatively, recognition as an innovative reading. A second task is to identify when the arguments of a specified predicate does not satisfy selectional restrictions, and if not, to identify both the type mismatch and the type shift (coercion).<br />
* '''Temporal information processing''': the temporal location and order of events in newspaper articles, narratives, and similar texts. The task is to identify the events described in a text and locate these in time, i.e., identification of temporal referring expressions, events and temporal relations within a text. A further task requires systems to recognize which of a fixed set of temporal relations holds between (a) events and time expressions within the same sentence (b) events and the document creation time (c) main events in consecutive sentences, and (d) two events where one syntactically dominates the other.<br />
* '''Coreference resolution''': detection and resolution of coreferences. The task is to detect full coreference chains, composed of named entities, pronouns, and full noun phrases, and to resolve pronouns, i.e., to find their antecedents.<br />
* '''Sentiment analysis''': emotion annotation and polarity labeling. The task is to classify the titles of newspaper articles with the appropriate emotion label and/or with a valence indication (positive/negative), given a predefined set of six emotion labels (i.e., Anger, Disgust, Fear, Joy, Sadness, Surprise).<br />
This list is expected to grow as the field progresses.<br />
<br />
Some tasks are closely related to each other. For instance, word sense disambiguation (monolingual, multilingual, and cross-lingual), word-sense induction, lexical substitution, and the evaluation of lexical resources all concern word senses.<br />
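The core problem shared by these word-sense tasks can be sketched with the classic (simplified) Lesk algorithm: choose the sense whose dictionary gloss overlaps most with the target word's context. The tiny sense inventory below is invented for illustration; evaluated systems draw on real inventories such as WordNet and far stronger models.

```python
# Invented two-sense inventory for the ambiguous word "bank".
TOY_INVENTORY = {
    "bank": {
        "bank%1": "a financial institution that accepts deposits and lends money",
        "bank%2": "sloping land beside a body of water such as a river",
    }
}

def lesk(word: str, context: str, inventory=TOY_INVENTORY) -> str:
    """Return the sense id whose gloss shares the most words with the context."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in inventory[word].items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(lesk("bank", "he sat on the bank of the river watching the water"))  # bank%2
```

The same gloss-overlap idea underlies simple baselines for lexical substitution (rank substitutes by fit to the disambiguated sense) and for the topic-signature evaluation of lexical resources mentioned above.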
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
==SemEval on Wikipedia==<br />
<br />
On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page has been created; contributions and suggestions are welcome on how to improve the page and further the understanding of computational semantics.<br />
<br />
==See also==<br />
<br />
* [[Semantics]]<br />
* [[Computational Semantics]]<br />
* [[Statistical Semantics]]<br />
* [[Semantics software for English]]<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8469SemEval Portal2010-11-27T19:31:18Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantics|computational semantic analysis]] systems; it evolved from the Senseval [[Word sense disambiguation|Word sense]] evaluation series. The evaluations are intended to explore the nature of [[Semantics|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.<br />
<br />
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify [[Word sense disambiguation|word senses]] computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation include:<br />
* '''[[Word sense disambiguation]]''': WSD, lexical sample and all-words, the process of identifying which [[Word sense disambiguation|sense]] of a word (i.e. [[Semantics|meaning]]) is used in a [[Sentence (linguistics)|sentence]], when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: "[[lexical sample task|lexical sample]]" and "[[all-words task|all words]]" task. The former comprises disambiguating the occurrences of a small sample of target words which were previously selected, while in the latter all the words in a piece of running text need to be disambiguated. Tasks have been performed for many languages. Tasks have covered disambiguation of nouns, verbs, adjectives, and prepositions.<br />
* '''Multi-lingual or cross-lingual word-sense disambiguation''': word senses are defined according to translation distinctions, e.g., a polysemous word in Japanese is translated differently in a given context. The WSD task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are use to assess the quality of the disambiguation.<br />
* '''Word-sense induction''': comparison of sense-induction and discrimination systems. The task is to cluster corpus instances (word uses, rather than word senses) and to evaluate systems on how well they correspond to pre-existing sense inventories or to various sense mapping systems.<br />
* '''Lexical substitution''': find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. It allows the use of kind of lexical resource or technique, including word sense disambiguation and word sense induction. A cross-lingual task was also defined.<br />
* '''Evaluation of lexical resources''': the task evaluates the submitted lexical resources indirectly, running a simple WSD based on topic signatures (sets of words related to each target sense). A lexical sample tagged with English WordNet senses was used for evaluation. <br />
* '''Subcategorization acquistion''': semantically similar verbs are similar in terms of subcategorization frames. The task is to use any available method for disambiguating verb senses, so that the results can then be fed into automatic methods used for acquiring subcategorization frames, with the hypothesis that the disambiguation will cluster the instances.<br />
* '''Semantic role labeling''': identifying and labeling constituents of sentences with their semantic roles. The basic task began with attempts to replicate FrameNet data, specifically frame elements. This task has expanded to inferring and developing new frames and frame elements, in individual sentences and in full running texts, with identification of intersentential links and coreference chains.<br />
* '''Semantic relation identification''': examining relations between lexical items in a sentence. The task, given a sample of semantic relation types, is to identify and classify semantic relations between nominals(i.e., nouns and base noun phrases, excluding named entities); a main purpose of this task is to assess different classification methods. Another task is, given a sentence and two tagged nominals, to predict the relation between those nominals and the direction of the relation.<br />
* '''Metonymy resolution''': the figurative substitution of an attribute of a name for the thing specified. The task is a lexical sample task (1) to classify preselected expressions of a particular semantic class (such as country names) as having a literal or a metonymic reading, and if so, (2) to identify a further specification into prespecified metonymic patterns (such as place-for-event or company-for-stock) or, alternatively, recognition as an innovative reading. A second task is to identify when the arguments of a specified predicate does not satisfy selectional restrictions, and if not, to identify both the type mismatch and the type shift (coercion).<br />
* '''Temporal information processing''': the temporal location and order of events in newspaper articles, narratives, and similar texts. The task is to identify the events described in a text and locate these in time, i.e., identification of temporal referring expressions, events and temporal relations within a text. A further task requires systems to recognize which of a fixed set of temporal relations holds between (a) events and time expressions within the same sentence (b) events and the document creation time (c) main events in consecutive sentences, and (d) two events where one syntactically dominates the other.<br />
* '''Coreference resolution'''<br />
* '''Sentiment analysis''': emotion annotation, polarity orientation labeling. The task is to classify the titles of newspaper articles with the appropriate emotion label and/or with a valence indication (positive/negative), given a set of predefined six emotion labels (i.e., Anger, Disgust, Fear, Joy, Sadness, Surprise).<br />
This list is expected to grow as the field progresses.<br />
<br />
Some tasks are closely related to each other. For instance, word sense disambiguation (monolingual, multi-lingual and cross-lingual), word sense induction task, lexical substitution and evaluation of lexical resources are all related to word senses.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
==SemEval on Wikipedia==<br />
<br />
On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page had been created and it is calling for contributions and suggestions on how to improve the Wikipedia page and to further the understanding of computational semantics.<br />
<br />
==See also==<br />
<br />
* [[Semantics]]<br />
* [[Computational Semantics]]<br />
* [[Statistical Semantics]]<br />
* [[Semantics software for English]]<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8468SemEval Portal2010-11-27T19:19:45Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantics|computational semantic analysis]] systems; it evolved from the Senseval [[Word sense disambiguation|Word sense]] evaluation series. The evaluations are intended to explore the nature of [[Semantics|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.<br />
<br />
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify [[Word sense disambiguation|word senses]] computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation include:<br />
* '''[[Word sense disambiguation]]''': WSD, lexical sample and all-words, the process of identifying which [[Word sense disambiguation|sense]] of a word (i.e. [[Semantics|meaning]]) is used in a [[Sentence (linguistics)|sentence]], when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: "[[lexical sample task|lexical sample]]" and "[[all-words task|all words]]" task. The former comprises disambiguating the occurrences of a small sample of target words which were previously selected, while in the latter all the words in a piece of running text need to be disambiguated. Tasks have been performed for many languages. Tasks have covered disambiguation of nouns, verbs, adjectives, and prepositions.<br />
* '''Multi-lingual or cross-lingual word-sense disambiguation''': word senses are defined according to translation distinctions, e.g., a polysemous word in Japanese is translated differently in a given context. The WSD task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are use to assess the quality of the disambiguation.<br />
* '''Word-sense induction''': comparison of sense-induction and discrimination systems. The task is to cluster corpus instances (word uses, rather than word senses) and to evaluate systems on how well they correspond to pre-existing sense inventories or to various sense mapping systems.<br />
* '''Lexical substitution''': find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. It allows the use of kind of lexical resource or technique, including word sense disambiguation and word sense induction. A cross-lingual task was also defined.<br />
* '''Evaluation of lexical resources''': the task evaluates the submitted lexical resources indirectly, running a simple WSD based on topic signatures (sets of words related to each target sense). A lexical sample tagged with English WordNet senses was used for evaluation. <br />
* '''Subcategorization acquistion''': semantically similar verbs are similar in terms of subcategorization frames. The task is to use any available method for disambiguating verb senses, so that the results can then be fed into automatic methods used for acquiring subcategorization frames, with the hypothesis that the disambiguation will cluster the instances.<br />
* '''Semantic role labeling''': identifying and labeling constituents of sentences with their semantic roles. The basic task began with attempts to replicate FrameNet data, specifically frame elements. This task has expanded to inferring and developing new frames and frame elements, in individual sentences and in full running texts, with identification of intersentential links and coreference chains.<br />
* '''Semantic relation identification''': examining relations between lexical items in a sentence. The task, given a sample of semantic relation types, is to identify and classify semantic relations between nominals(i.e., nouns and base noun phrases, excluding named entities); a main purpose of this task is to assess different classification methods. Another task is, given a sentence and two tagged nominals, to predict the relation between those nominals and the direction of the relation.<br />
* '''Metonymy resolution''': the figurative substitution of an attribute of a name for the thing specified. The task is a lexical sample task (1) to classify preselected expressions of a particular semantic class (such as country names) as having a literal or a metonymic reading, and if so, (2) to identify a further specification into prespecified metonymic patterns (such as place-for-event or company-for-stock) or, alternatively, recognition as an innovative reading. A second task is to identify when the arguments of a specified predicate does not satisfy selectional restrictions, and if not, to identify both the type mismatch and the type shift (coercion).<br />
* '''Temporal information processing''': the temporal location and order of events in newspaper articles, narratives, and similar texts. The task is to identify the events described in a text and locate these in time, i.e., identification of temporal referring expressions, events and temporal relations within a text. A further task requires systems to recognize which of a fixed set of temporal relations holds between (a) events and time expressions within the same sentence (b) events and the document creation time (c) main events in consecutive sentences, and (d) two events where one syntactically dominates the other.<br />
* '''Coreference resolution'''<br />
* '''Sentiment analysis'''<br />
This list is expected to grow as the field progresses.<br />
<br />
Some tasks are closely related to each other. For instance, word sense disambiguation (monolingual, multi-lingual and cross-lingual), word sense induction task, lexical substitution and evaluation of lexical resources are all related to word senses.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
==SemEval on Wikipedia==<br />
<br />
On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page had been created and it is calling for contributions and suggestions on how to improve the Wikipedia page and to further the understanding of computational semantics.<br />
<br />
==See also==<br />
<br />
* [[Semantics]]<br />
* [[Computational Semantics]]<br />
* [[Statistical Semantics]]<br />
* [[Semantics software for English]]<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8467SemEval Portal2010-11-27T18:14:14Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantics|computational semantic analysis]] systems; it evolved from the Senseval [[Word sense disambiguation|Word sense]] evaluation series. The evaluations are intended to explore the nature of [[Semantics|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.<br />
<br />
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify [[Word sense disambiguation|word senses]] computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation include:<br />
* '''[[Word sense disambiguation]]''': WSD, lexical sample and all-words, the process of identifying which [[Word sense disambiguation|sense]] of a word (i.e. [[Semantics|meaning]]) is used in a [[Sentence (linguistics)|sentence]], when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: "[[lexical sample task|lexical sample]]" and "[[all-words task|all words]]" task. The former comprises disambiguating the occurrences of a small sample of target words which were previously selected, while in the latter all the words in a piece of running text need to be disambiguated. Tasks have been performed for many languages. Tasks have covered disambiguation of nouns, verbs, adjectives, and prepositions.<br />
* '''Multi-lingual or cross-lingual word-sense disambiguation''': word senses are defined according to translation distinctions, e.g., a polysemous word in Japanese is translated differently in a given context. The WSD task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are use to assess the quality of the disambiguation.<br />
* '''Word-sense induction''': comparison of sense-induction and discrimination systems. The task is to cluster corpus instances (word uses, rather than word senses) and to evaluate systems on how well they correspond to pre-existing sense inventories or to various sense mapping systems.<br />
* '''Lexical substitution''': find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. It allows the use of kind of lexical resource or technique, including word sense disambiguation and word sense induction. A cross-lingual task was also defined.<br />
* '''Evaluation of lexical resources''': the task evaluates the submitted lexical resources indirectly, running a simple WSD based on topic signatures (sets of words related to each target sense). A lexical sample tagged with English WordNet senses was used for evaluation. <br />
* '''Subcategorization acquistion''': semantically similar verbs are similar in terms of subcategorization frames. The task is to use any available method for disambiguating verb senses, so that the results can then be fed into automatic methods used for acquiring subcategorization frames, with the hypothesis that the disambiguation will cluster the instances.<br />
* '''Semantic role labeling''': identifying and labeling constituents of sentences with their semantic roles. The basic task began with attempts to replicate FrameNet data, specifically frame elements. This task has expanded to inferring and developing new frames and frame elements, in individual sentences and in full running texts, with identification of intersentential links and coreference chains.<br />
* '''Semantic relation identification'''<br />
* '''Metonymy resolution''': the figurative substitution of an attribute of a name for the thing specified. The task is a lexical sample task (1) to classify preselected expressions of a particular semantic class (such as country names) as having a literal or a metonymic reading, and if so, (2) to identify a further specification into prespecified metonymic patterns (such as place-for-event or company-for-stock) or, alternatively, recognition as an innovative reading. A second task is to identify when the arguments of a specified predicate does not satisfy selectional restrictions, and if not, to identify both the type mismatch and the type shift (coercion).<br />
* '''Temporal information processing''': the temporal location and order of events in newspaper articles, narratives, and similar texts. The task is to identify the events described in a text and locate these in time, i.e., identification of temporal referring expressions, events and temporal relations within a text. A further task requires systems to recognize which of a fixed set of temporal relations holds between (a) events and time expressions within the same sentence (b) events and the document creation time (c) main events in consecutive sentences, and (d) two events where one syntactically dominates the other.<br />
* '''Coreference resolution'''<br />
* '''Sentiment analysis'''<br />
This list is expected to grow as the field progresses.<br />
<br />
Some tasks are closely related to each other. For instance, word sense disambiguation (monolingual, multi-lingual and cross-lingual), word sense induction task, lexical substitution and evaluation of lexical resources are all related to word senses.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
==SemEval on Wikipedia==<br />
<br />
On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page has been created; it calls for contributions and suggestions on how to improve the Wikipedia page and further the understanding of computational semantics.<br />
<br />
==See also==<br />
<br />
* [[Semantics]]<br />
* [[Computational Semantics]]<br />
* [[Statistical Semantics]]<br />
* [[Semantics software for English]]<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8466SemEval Portal2010-11-27T18:12:13Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantics|computational semantic analysis]] systems; it evolved from the Senseval [[Word sense disambiguation|Word sense]] evaluation series. The evaluations are intended to explore the nature of [[Semantics|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.<br />
<br />
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify [[Word sense disambiguation|word senses]] computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation include:<br />
* '''[[Word sense disambiguation]]''' (WSD; lexical sample and all-words): the process of identifying which [[Word sense disambiguation|sense]] of a word (i.e., [[Semantics|meaning]]) is used in a [[Sentence (linguistics)|sentence]] when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: the "[[lexical sample task|lexical sample]]" and the "[[all-words task|all words]]" task. The former involves disambiguating occurrences of a small sample of previously selected target words, while in the latter all the words in a piece of running text must be disambiguated. Tasks have been run for many languages and have covered disambiguation of nouns, verbs, adjectives, and prepositions.<br />
* '''Multi-lingual or cross-lingual word-sense disambiguation''': word senses are defined according to translation distinctions, e.g., a polysemous word in Japanese is translated differently depending on context. The WSD task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are used to assess the quality of the disambiguation.<br />
* '''Word-sense induction''': comparison of sense-induction and discrimination systems. The task is to cluster corpus instances (word uses, rather than word senses) and to evaluate systems on how well they correspond to pre-existing sense inventories or to various sense mapping systems.<br />
* '''Lexical substitution''': find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. It allows the use of any kind of lexical resource or technique, including word sense disambiguation and word sense induction. A cross-lingual task was also defined.<br />
* '''Evaluation of lexical resources''': the task evaluates the submitted lexical resources indirectly, by running a simple WSD system based on topic signatures (sets of words related to each target sense). A lexical sample tagged with English WordNet senses was used for evaluation.<br />
* '''Subcategorization acquisition''': semantically similar verbs are similar in terms of subcategorization frames. The task is to use any available method for disambiguating verb senses, so that the results can then be fed into automatic methods for acquiring subcategorization frames, with the hypothesis that the disambiguation will cluster the instances.<br />
* '''Semantic role labeling''': identifying and labeling constituents of sentences with their semantic roles. The basic task began with attempts to replicate FrameNet data, specifically frame elements. This task has expanded to inferring and developing new frames and frame elements, in individual sentences and in full running texts, with identification of intersentential links and coreference chains.<br />
* '''Semantic relation identification'''<br />
* '''Metonymy resolution''': the figurative substitution of an attribute of a name for the thing specified. The task is a lexical sample task (1) to classify preselected expressions of a particular semantic class (such as country names) as having a literal or a metonymic reading and, for metonymic readings, (2) to classify them into prespecified metonymic patterns (such as place-for-event or company-for-stock) or, alternatively, to recognize them as innovative readings. A second task is to determine whether the arguments of a specified predicate satisfy its selectional restrictions and, when they do not, to identify both the type mismatch and the type shift (coercion).<br />
* '''Temporal information processing''': the temporal location and order of events in newspaper articles, narratives, and similar texts. The task is to identify the events described in a text and locate these in time, i.e., identification of temporal referring expressions, events and temporal relations within a text.<br />
* '''Coreference resolution'''<br />
* '''Sentiment analysis'''<br />
This list is expected to grow as the field progresses.<br />
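The topic-signature approach mentioned under ''Evaluation of lexical resources'' can be illustrated with a simple overlap heuristic: pick the sense whose signature shares the most words with the target word's context. The sketch below assumes signatures are plain word sets with hypothetical sense labels; real topic signatures are typically weighted word lists rather than unweighted sets:

```python
def disambiguate(context_words, signatures):
    """Return the sense whose topic signature (a set of related words)
    has the largest overlap with the target word's context."""
    context = set(w.lower() for w in context_words)
    return max(signatures, key=lambda sense: len(context & signatures[sense]))

# Hypothetical signatures for two senses of "bank".
signatures = {
    "bank.financial": {"money", "loan", "deposit", "account"},
    "bank.river": {"water", "shore", "fish", "erosion"},
}
context = "she opened a deposit account to keep her money".split()
print(disambiguate(context, signatures))  # bank.financial
```

Because such a disambiguator uses nothing but the lexical resource itself, its WSD accuracy serves as an indirect measure of the resource's quality, which is the point of the evaluation task.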
<br />
Some tasks are closely related to each other. For instance, word sense disambiguation (monolingual, multi-lingual, and cross-lingual), word-sense induction, lexical substitution, and evaluation of lexical resources all relate to word senses.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
==SemEval on Wikipedia==<br />
<br />
On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page has been created; it calls for contributions and suggestions on how to improve the Wikipedia page and further the understanding of computational semantics.<br />
<br />
==See also==<br />
<br />
* [[Semantics]]<br />
* [[Computational Semantics]]<br />
* [[Statistical Semantics]]<br />
* [[Semantics software for English]]<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8465SemEval Portal2010-11-27T18:04:22Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantics|computational semantic analysis]] systems; it evolved from the Senseval [[Word sense disambiguation|Word sense]] evaluation series. The evaluations are intended to explore the nature of [[Semantics|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.<br />
<br />
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify [[Word sense disambiguation|word senses]] computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation include:<br />
* '''[[Word sense disambiguation]]''' (WSD; lexical sample and all-words): the process of identifying which [[Word sense disambiguation|sense]] of a word (i.e., [[Semantics|meaning]]) is used in a [[Sentence (linguistics)|sentence]] when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: the "[[lexical sample task|lexical sample]]" and the "[[all-words task|all words]]" task. The former involves disambiguating occurrences of a small sample of previously selected target words, while in the latter all the words in a piece of running text must be disambiguated. Tasks have been run for many languages and have covered disambiguation of nouns, verbs, adjectives, and prepositions.<br />
* '''Multi-lingual or cross-lingual word-sense disambiguation''': word senses are defined according to translation distinctions, e.g., a polysemous word in Japanese is translated differently depending on context. The WSD task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are used to assess the quality of the disambiguation.<br />
* '''Word-sense induction''': comparison of sense-induction and discrimination systems. The task is to cluster corpus instances (word uses, rather than word senses) and to evaluate systems on how well they correspond to pre-existing sense inventories or to various sense mapping systems.<br />
* '''Lexical substitution''': find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. It allows the use of any kind of lexical resource or technique, including word sense disambiguation and word sense induction. A cross-lingual task was also defined.<br />
* '''Evaluation of lexical resources''': the task evaluates the submitted lexical resources indirectly, by running a simple WSD system based on topic signatures (sets of words related to each target sense). A lexical sample tagged with English WordNet senses was used for evaluation.<br />
* '''Subcategorization acquisition''': semantically similar verbs are similar in terms of subcategorization frames. The task is to use any available method for disambiguating verb senses, so that the results can then be fed into automatic methods for acquiring subcategorization frames, with the hypothesis that the disambiguation will cluster the instances.<br />
* '''Semantic role labeling''': identifying and labeling constituents of sentences with their semantic roles. The basic task began with attempts to replicate FrameNet data, specifically frame elements. This task has expanded to inferring and developing new frames and frame elements, in individual sentences and in full running texts, with identification of intersentential links and coreference chains.<br />
* '''Semantic relation identification'''<br />
* '''Metonymy resolution'''<br />
* '''Temporal information processing''': the temporal location and order of events in newspaper articles, narratives, and similar texts. The task is to identify the events described in a text and locate these in time, i.e., identification of temporal referring expressions, events and temporal relations within a text.<br />
* '''Coreference resolution'''<br />
* '''Sentiment analysis'''<br />
This list is expected to grow as the field progresses.<br />
<br />
Some tasks are closely related to each other. For instance, word sense disambiguation (monolingual, multi-lingual, and cross-lingual), word-sense induction, lexical substitution, and evaluation of lexical resources all relate to word senses.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
==SemEval on Wikipedia==<br />
<br />
On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page has been created; it calls for contributions and suggestions on how to improve the Wikipedia page and further the understanding of computational semantics.<br />
<br />
==See also==<br />
<br />
* [[Semantics]]<br />
* [[Computational Semantics]]<br />
* [[Statistical Semantics]]<br />
* [[Semantics software for English]]<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8448SemEval Portal2010-11-21T22:26:59Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantics|computational semantic analysis]] systems; it evolved from the Senseval [[Word sense disambiguation|Word sense]] evaluation series. The evaluations are intended to explore the nature of [[Semantics|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.<br />
<br />
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify [[Word sense disambiguation|word senses]] computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation include:<br />
* '''[[Word sense disambiguation]]''' (WSD; lexical sample and all-words): the process of identifying which [[Word sense disambiguation|sense]] of a word (i.e., [[Semantics|meaning]]) is used in a [[Sentence (linguistics)|sentence]] when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: the "[[lexical sample task|lexical sample]]" and the "[[all-words task|all words]]" task. The former involves disambiguating occurrences of a small sample of previously selected target words, while in the latter all the words in a piece of running text must be disambiguated. Tasks have been run for many languages and have covered disambiguation of nouns, verbs, adjectives, and prepositions.<br />
* '''Multi-lingual or cross-lingual word-sense disambiguation''': word senses are defined according to translation distinctions, e.g., a polysemous word in Japanese is translated differently depending on context. The WSD task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are used to assess the quality of the disambiguation.<br />
* '''Subcategorization acquisition''': semantically similar verbs are similar in terms of subcategorization frames. The task is to use any available method for disambiguating verb senses, so that the results can then be fed into automatic methods for acquiring subcategorization frames, with the hypothesis that the disambiguation will cluster the instances.<br />
* '''Semantic role labeling''': identifying and labeling constituents of sentences with their semantic roles. The basic task began with attempts to replicate FrameNet data, specifically frame elements. This task has expanded to inferring and developing new frames and frame elements, in individual sentences and in full running texts, with identification of intersentential links and coreference chains.<br />
* '''Word-sense induction''': comparison of sense-induction and discrimination systems. The task is to cluster corpus instances (word uses, rather than word senses) and to evaluate systems on how well they correspond to pre-existing sense inventories or to various sense mapping systems.<br />
* '''Semantic relation identification'''<br />
* '''Metonymy resolution'''<br />
* '''Temporal information processing'''<br />
* '''Lexical substitution'''<br />
* '''Evaluation of lexical resources'''<br />
* '''Coreference resolution'''<br />
* '''Sentiment analysis'''<br />
This list is expected to grow as the field progresses.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
==SemEval on Wikipedia==<br />
<br />
On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page has been created; it calls for contributions and suggestions on how to improve the Wikipedia page and further the understanding of computational semantics.<br />
<br />
==See also==<br />
<br />
* [[Semantics]]<br />
* [[Computational Semantics]]<br />
* [[Statistical Semantics]]<br />
* [[Semantics software for English]]<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8443SemEval Portal2010-11-19T03:53:08Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantic analysis (computational)|computational semantic analysis]] systems; it evolved from the Senseval [[Sense|Word sense]] evaluation series. The evaluations are intended to explore the nature of [[Meaning_(linguistics)|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.<br />
<br />
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify [[word sense]]s computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation include:<br />
*Word-sense disambiguation (WSD; lexical sample and all-words): the process of identifying which [[word sense|sense]] of a word (i.e., [[meaning (linguistics)|meaning]]) is used in a [[Sentence (linguistics)|sentence]] when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: the "[[lexical sample task|lexical sample]]" and the "[[all-words task|all words]]" task. The former involves disambiguating occurrences of a small sample of previously selected target words, while in the latter all the words in a piece of running text must be disambiguated.<br />
*Multi-lingual or cross-lingual word-sense disambiguation: word senses are defined according to translation distinctions, e.g., a polysemous word in Japanese is translated differently depending on context. The WSD task provides texts with target words and requires identification of the appropriate translation. A related task is cross-language information retrieval, where participants disambiguate in one language (e.g., with WordNet synsets) and retrieve documents in another language; standard information retrieval metrics are used to assess the quality of the disambiguation.<br />
*Subcategorization acquisition: semantically similar verbs are similar in terms of subcategorization frames. The task is to use any available method for disambiguating verb senses, so that the results can then be fed into automatic methods for acquiring subcategorization frames, with the hypothesis that the disambiguation will cluster the instances.<br />
*Semantic role labeling<br />
*Word-sense induction<br />
*Semantic relation identification<br />
*Metonymy resolution<br />
*Temporal information processing<br />
*Lexical substitution<br />
*Evaluation of lexical resources<br />
*Coreference resolution<br />
*Sentiment analysis<br />
This list is expected to grow as the field progresses.<br />
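The lexical-sample setting can be illustrated with a toy gloss-overlap disambiguator in the spirit of the simplified Lesk algorithm. This is only a sketch, not a SemEval system: the two-sense inventory for "bank" and its glosses are invented for illustration, whereas real systems draw senses and glosses from a lexical resource such as WordNet.

```python
# Toy word-sense disambiguation via gloss overlap (simplified Lesk).
# The sense inventory below is invented for illustration only; real
# systems use a lexical resource such as WordNet.

SENSES = {
    "bank.n.01": "sloping land beside a body of water such as a river",
    "bank.n.02": "a financial institution that accepts deposits and lends money",
}

def lesk(context, senses):
    """Return the sense whose gloss shares the most words with the context."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense_id, gloss in senses.items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense_id, overlap
    return best_sense

print(lesk("she deposited the money at the bank", SENSES))  # prints "bank.n.02"
```

In a lexical-sample task, such a disambiguator would be run only on marked occurrences of preselected target words; in an all-words task it would be applied to every content word in the running text.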
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
==SemEval on Wikipedia==<br />
<br />
On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page has been created; it calls for contributions and suggestions on how to improve the page and further the understanding of computational semantics.<br />
<br />
==See also==<br />
<br />
* [[Semantics]]<br />
* [[Computational Semantics]]<br />
* [[Statistical Semantics]]<br />
* [[Semantics software for English]]<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8442SemEval Portal2010-11-19T03:12:11Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
'''SemEval''' (Semantic Evaluation) is an ongoing series of evaluations of [[Semantic analysis (computational)|computational semantic analysis]] systems; it evolved from the Senseval [[Sense|Word sense]] evaluation series. The evaluations are intended to explore the nature of [[Meaning_(linguistics)|meaning]] in language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.<br />
<br />
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identify [[word sense]]s computationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g., [[semantic role labeling]]), relations between sentences (e.g., [[coreference]]), and the nature of what we are saying ([[semantic relations]] and [[sentiment analysis]]).<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation include:<br />
*Word-sense disambiguation (lexical sample and all-words): the process of identifying which [[word sense|sense]] of a word (i.e., [[meaning (linguistics)|meaning]]) is used in a [[Sentence (linguistics)|sentence]] when the word has multiple meanings ([[polysemy]]). The WSD task has two variants: the "[[lexical sample task|lexical sample]]" task and the "[[all-words task|all words]]" task. The former involves disambiguating occurrences of a small, preselected sample of target words, while in the latter all the words in a piece of running text must be disambiguated.<br />
*Multi-lingual or cross-lingual word-sense disambiguation<br />
*Subcategorization acquisition<br />
*Semantic role labeling<br />
*Word-sense induction<br />
*Semantic relation identification<br />
*Metonymy resolution<br />
*Temporal information processing<br />
*Lexical substitution<br />
*Evaluation of lexical resources<br />
*Coreference resolution<br />
*Sentiment analysis<br />
This list is expected to grow as the field progresses.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
==SemEval on Wikipedia==<br />
<br />
On [http://en.wikipedia.org/wiki/Main_Page Wikipedia], a [http://en.wikipedia.org/wiki/SemEval SemEval] page has been created; it calls for contributions and suggestions on how to improve the page and further the understanding of computational semantics.<br />
<br />
==See also==<br />
<br />
* [[Semantics]]<br />
* [[Computational Semantics]]<br />
* [[Statistical Semantics]]<br />
* [[Semantics software for English]]<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_2012_versus_2013&diff=8337SemEval 2012 versus 20132010-11-01T20:01:34Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
So far, SemEval has been organised in a 3 year cycle. However, many participants feel that this is a long wait. Many other shared tasks such as CoNLL and RTE run annually. For this reason we are giving the opportunity for task organisers to choose between a 2 year or a 3 year cycle. Task proposers will be asked to vote for the date of SemEval-3 and choose amongst:<br />
<br />
* 2012 - ACL<br />
* 2013 - NAACL<br />
* 2013 - ACL<br />
<br />
Related links:<br />
<br />
* [[SemEval 3]]<br />
* [[Draft Schedule for SemEval 3]]<br />
<br />
<br />
== Votes ==<br />
<br />
'''Deadline for votes:''' November 22, 2010<br />
<br />
''Increment the numbers below according to your preference.''<br />
<br />
'''For 2012:''' 2<br />
<br />
'''For 2013:''' 9<br />
<br />
: '''For 2013 NAACL:''' 0<br />
: '''For 2013 ACL:''' 0<br />
<br />
== Reasons for 2012 ==<br />
<br />
We still have almost 2 years! If it's after 3 years, we probably won't be doing anything in the first year anyway. Hence, I vote for 2012. <br><br />
--<br />
Naushad UzZaman, University of Rochester (TempEval-2 participant and TempEval-3 co-organizer)<br />
<br />
== Reasons for 2013 ==<br />
<br />
I understand that some participants are eager to run in yet another "competition" sooner rather than later. This is no reason to believe that squeezing the cycle into two years serves a useful purpose. My perspective is that of an organizer. A new task requires much thought and even more legwork. An old task merely repeated is not worth the bytes its data sit in. There must be new elements. To reuse the old data is easier said than done. I could share our experience with tasks 4 (2007) and 8 (2010). There was a markedly higher effort in 2009-2010 than any of us had initially thought. If the community goes with the idea of a common annotation style, well, that alone requires a deeper reflection.<br />
<br />
I could go on, but you may already be bored. Let me just make a social observation. What we do is not a spat, a fistfight or a race. It is a shared evaluation exercise. That many people treat it as a fight is painfully obvious. I suspect that it is not uncommon for someone to use the scores -- especially a showing close to the top -- as an argument in grant applications or requests for promotion. A stimulating intellectual challenge turns into (pardon the expression) a pissing contest. Naturally, it is better to have more chances to win that medal.<br />
<br />
I propose to keep the usual pace. A three-year cycle has worked well. It allows organizers to do their work carefully and thoughtfully, without overstraining themselves. Those who run annual events probably survive only because innovation is very incremental. <br />
<br />
--<br />
Stan Szpakowicz, PhD, Professor<br />
SITE, Computer Science, University of Ottawa<br />
<br />
-------------------------------------------------------<br />
As a task organiser from 2007 (task 10) and 2010 (task 2) I concur with Stan's sentiments. It does all depend on the thought that is required for the new approach/annotation and how much time the organisers have to spare. Another argument for a longer cycle is that it gives some time for analysis of the previous data before implementing the new ideas. I do agree with Suresh and Deniz (the current SemEval co-chairs) that the decision should rest with those who are willing to organise the tasks.<br />
<br />
--<br />
Diana McCarthy, (co-) Director<br />
Lexical Computing Ltd., Brighton UK<br />
<br />
---------------------------------------------------------<br />
<br />
As of the date of this posting (Oct 28, 2010), I would note that about 6? months have already passed since the results were submitted in the last SemEval. I think there is some argument for allowing ourselves a bit of breathing room, and also time to reflect on what happened last time before moving on to the next round.<br />
<br />
I particularly like the idea of having a workshop of some kind about one year after the conclusion of a SemEval; it would allow for more detailed discussion of what happened last time around, might help to avoid too much repetition in tasks, and would also provide a nice opportunity for a more in-depth discussion of lessons learned. Given that, I tend to prefer a 3-year window (particularly if this enables a lessons-learned workshop after 1 year). There was, for example, an ACL 2002 workshop after Senseval-2 (2001) that focused on "Recent Successes and Future Directions" and featured some papers that did a bit more analysis of results, etc., than is normally possible right after the event. I couldn't find the call for this event, but you can see the proceedings here:<br />
<br />
http://aclweb.org/anthology-new/W/W02/#0800<br />
<br />
I'd also be concerned that a 2 year window this time around might not be sustainable, and so we'd end up fluctuating a bit on the interval as the years go by. I think having it generally understood that SemEval will happen every 3 years is nice in that folks can generally plan on it. This can be helpful when working with or planning to work with students or others on a fixed time interval.<br />
<br />
Ted Pedersen<br />
<br />
http://www.d.umn.edu/~tpederse<br />
<br />
task participant 2001, 2004, 2007, 2010 and task organizer 2004<br />
<br />
------------------------------------------------------------<br />
<br />
In my opinion, *before* one can design a proposal, one needs to know the timeline in advance: is it 2-year or 3-year?<br />
<br />
One could plan a task very differently for a 2-year and for a 3-year cycle:<br />
(a) For a 2-year cycle, given the rush, there would be a tendency to repeat an old task with some minor changes, which is of questionable utility.<br />
(b) For a 3-year cycle, one could think of something really new and interesting; this would require a careful task design (which involves much discussion and fighting), for which one month would not be enough.<br />
<br />
---<br />
Preslav Nakov, Ph.D.<br />
National University of Singapore<br />
http://nakov.eu<br />
<br />
---------------------------------------------------------<br />
<br />
As a co-organiser of two Semeval-2007 tasks I can say that you need to a) carefully think about what to do next (you want your task to allow for something new), b) choose the dataset(s), c) build an annotation interface, d) annotate, e) think about how to evaluate and prepare evaluation software. Especially point (a) needs time, so devising a new task that is not derivative takes time and thinking. Also, I agree that analyzing the results of previous tasks and meeting to talk about that is an important step before developing new tasks and ideas. So, if it wasn't clear before :-), I am all for Semeval 2013!<br />
<br />
---<br />
Roberto Navigli<br />
SAPIENZA Universita' di Roma<br />
http://www.dsi.uniroma1.it/~navigli<br />
<br />
------------------------------------------------------------<br />
<br />
I have been participating in Senseval/Semeval since the beginning, as a participant and as an organizer. I have two major points to make: <br />
(1) As currently envisaged, the identification of a task occurs in the mind of a single individual or small coordinated group. As a result, the task is somewhat random. This year's organizers have made the strong point about combining tasks, so as to have several subtasks that have some coherence. I would suggest that we all need some overall coherence about outstanding semantic evaluation issues, with the hope that the tasks and subtasks are homing in on those problems. I made a start on this with a quick grouping of task types at http://aclweb.org/aclwiki/index.php?title=SemEval_Portal. I would like to encourage people to expand on this grouping with a greater identification of issues. This might facilitate the development of future tasks.<br />
(2) With the given structure, a task is defined and then somewhat cast in concrete. As both a participant and an organizer, I have found this to be a major difficulty: the task can't evolve and play out. As an organizer, I've found it necessary to have some back and forth before the final form of a task is crystallized. As a participant, it takes some time to play around with trial data and to wrap oneself around the task. The FrameNet linking task in SemEval-2010 is a case in point. It was sufficiently complex that many of those with an initial interest ended up not participating, and the original task was subdivided into a couple of smaller tasks. So, what I'm suggesting is that potential organizers throw out an initial, rough conceptualization with some trial data, and that the task not be locked down without a full airing of potential issues and refinements. I don't think it's necessary to have a lot of time between the availability of the training data and submission of results (5 months). Once you're pretty sure that you can get the kind of results you want, it's just a matter of making the final run.<br />
<br />
Ken Litkowski<br />
CL Research <br />
http://www.clres.com<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_2012_versus_2013&diff=8336SemEval 2012 versus 20132010-11-01T19:30:47Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
So far, SemEval has been organised in a 3 year cycle. However, many participants feel that this is a long wait. Many other shared tasks such as CoNLL and RTE run annually. For this reason we are giving the opportunity for task organisers to choose between a 2 year or a 3 year cycle. Task proposers will be asked to vote for the date of SemEval-3 and choose amongst:<br />
<br />
* 2012 - ACL<br />
* 2013 - NAACL<br />
* 2013 - ACL<br />
<br />
Related links:<br />
<br />
* [[SemEval 3]]<br />
* [[Draft Schedule for SemEval 3]]<br />
<br />
<br />
== Votes ==<br />
<br />
'''Deadline for votes:''' November 22, 2010<br />
<br />
''Increment the numbers below according to your preference.''<br />
<br />
'''For 2012:''' 2<br />
<br />
'''For 2013:''' 9<br />
<br />
: '''For 2013 NAACL:''' 0<br />
: '''For 2013 ACL:''' 0<br />
<br />
== Reasons for 2012 ==<br />
<br />
We still have almost 2 years! If it's after 3 years, we probably won't be doing anything in the first year anyway. Hence, I vote for 2012. <br><br />
--<br />
Naushad UzZaman, University of Rochester (TempEval-2 participant and TempEval-3 co-organizer)<br />
<br />
== Reasons for 2013 ==<br />
<br />
I understand that some participants are eager to run in yet another "competition" sooner rather than later. This is no reason to believe that squeezing the cycle into two years serves a useful purpose. My perspective is that of an organizer. A new task requires much thought and even more legwork. An old task merely repeated is not worth the bytes its data sit in. There must be new elements. To reuse the old data is easier said than done. I could share our experience with tasks 4 (2007) and 8 (2010). There was a markedly higher effort in 2009-2010 than any of us had initially thought. If the community goes with the idea of a common annotation style, well, that alone requires a deeper reflection.<br />
<br />
I could go on, but you may already be bored. Let me just make a social observation. What we do is not a spat, a fistfight or a race. It is a shared evaluation exercise. That many people treat it as a fight is painfully obvious. I suspect that it is not uncommon for someone to use the scores -- especially a showing close to the top -- as an argument in grant applications or requests for promotion. A stimulating intellectual challenge turns into (pardon the expression) a pissing contest. Naturally, it is better to have more chances to win that medal.<br />
<br />
I propose to keep the usual pace. A three-year cycle has worked well. It allows organizers to do their work carefully and thoughtfully, without overstraining themselves. Those who run annual events probably survive only because innovation is very incremental. <br />
<br />
--<br />
Stan Szpakowicz, PhD, Professor<br />
SITE, Computer Science, University of Ottawa<br />
<br />
-------------------------------------------------------<br />
As a task organiser from 2007 (task 10) and 2010 (task 2) I concur with Stan's sentiments. It does all depend on the thought that is required for the new approach/annotation and how much time the organisers have to spare. Another argument for a longer cycle is that it gives some time for analysis of the previous data before implementing the new ideas. I do agree with Suresh and Deniz (the current SemEval co-chairs) that the decision should rest with those who are willing to organise the tasks.<br />
<br />
--<br />
Diana McCarthy, (co-) Director<br />
Lexical Computing Ltd., Brighton UK<br />
<br />
---------------------------------------------------------<br />
<br />
As of the date of this posting (Oct 28, 2010), I would note that about 6? months have already passed since the results were submitted in the last SemEval. I think there is some argument for allowing ourselves a bit of breathing room, and also time to reflect on what happened last time before moving on to the next round.<br />
<br />
I particularly like the idea of having a workshop of some kind about one year after the conclusion of a SemEval; it would allow for more detailed discussion of what happened last time around, might help to avoid too much repetition in tasks, and would also provide a nice opportunity for a more in-depth discussion of lessons learned. Given that, I tend to prefer a 3-year window (particularly if this enables a lessons-learned workshop after 1 year). There was, for example, an ACL 2002 workshop after Senseval-2 (2001) that focused on "Recent Successes and Future Directions" and featured some papers that did a bit more analysis of results, etc., than is normally possible right after the event. I couldn't find the call for this event, but you can see the proceedings here:<br />
<br />
http://aclweb.org/anthology-new/W/W02/#0800<br />
<br />
I'd also be concerned that a 2 year window this time around might not be sustainable, and so we'd end up fluctuating a bit on the interval as the years go by. I think having it generally understood that SemEval will happen every 3 years is nice in that folks can generally plan on it. This can be helpful when working with or planning to work with students or others on a fixed time interval.<br />
<br />
Ted Pedersen<br />
<br />
http://www.d.umn.edu/~tpederse<br />
<br />
task participant 2001, 2004, 2007, 2010 and task organizer 2004<br />
<br />
------------------------------------------------------------<br />
<br />
In my opinion, *before* one can design a proposal, one needs to know the timeline in advance: is it 2-year or 3-year?<br />
<br />
One could plan a task very differently for a 2-year and for a 3-year cycle:<br />
(a) For a 2-year cycle, given the rush, there would be a tendency to repeat an old task with some minor changes, which is of questionable utility.<br />
(b) For a 3-year cycle, one could think of something really new and interesting; this would require a careful task design (which involves much discussions and fighting), for which one month would not be enough.<br />
<br />
---<br />
Preslav Nakov, Ph.D.<br />
National University of Singapore<br />
http://nakov.eu<br />
<br />
---------------------------------------------------------<br />
<br />
As a co-organiser of two Semeval-2007 tasks I can say that you need to a) carefully think about what to do next (you want your task to allow for something new), b) choose the dataset(s), c) build an annotation interface, d) annotate, e) think about how to evaluate and prepare evaluation software. Especially point (a) needs time, so devising a new task that is not derivative takes time and thinking. Also, I agree that analyzing the results of previous tasks and meeting to talk about that is an important step before developing new tasks and ideas. So, if it wasn't clear before :-), I am all for Semeval 2013!<br />
<br />
---<br />
Roberto Navigli<br />
SAPIENZA Universita' di Roma<br />
http://www.dsi.uniroma1.it/~navigli<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8236SemEval Portal2010-10-11T21:47:56Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Tasks in Semantic Evaluation==<br />
<br />
The major tasks in semantic evaluation are:<br />
*Word-sense disambiguation (lexical sample)<br />
*Word-sense disambiguation (all-words)<br />
*Multi-lingual or cross-lingual word-sense disambiguation<br />
*Subcategorization acquisition<br />
*Semantic role labeling<br />
*Word-sense induction<br />
*Semantic relation identification<br />
*Metonymy resolution<br />
*Temporal relation identification<br />
*Lexical substitution<br />
*Evaluation of lexical resources<br />
*Coreference resolution<br />
*Sentiment analysis<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8235SemEval Portal2010-10-11T21:14:00Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation ('''SemEval'''). <br />
==Semantic Evaluation Exercises==<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Issues in Semantic Evaluation==<br />
<br />
This section will identify the major issues in word-sense disambiguation and other areas of semantic evaluation.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8234SemEval Portal2010-10-11T21:11:38Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation (''SemEval''). <br />
==Semantic Evaluation Exercises==<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Issues in Semantic Evaluation==<br />
<br />
This section will identify the major issues in word-sense disambiguation and other semantic evaluation issues.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8233SemEval Portal2010-10-11T21:11:05Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
This page serves as a community portal for everything related to Semantic Evaluation (SemEval). <br />
==Semantic Evaluation Exercises==<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Issues in Semantic Evaluation==<br />
<br />
This section will identify the major issues in word-sense disambiguation and other semantic evaluation issues.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8232SemEval Portal2010-10-11T21:08:52Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
<br />
==Semantic Evaluation Exercises==<br />
<br />
The purpose of the SemEval exercises and [http://www.senseval.org SENSEVAL] is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Issues in Semantic Evaluation==<br />
<br />
This section will identify the major issues in word-sense disambiguation and other semantic evaluation issues.<br />
<br />
==Organization==<br />
<br />
[http://www.clres.com/siglex.html SIGLEX, the ACL Special Interest Group on the Lexicon] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3. Each exercise is usually organized by two individuals, who make the call for tasks and handle the overall administration. Within the general guidelines, each task is then organized and run by individuals or groups.<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8231SemEval Portal2010-10-11T20:57:18Z<p>Kenski: </p>
<hr />
<div>__FORCETOC__<br />
==Semantic Evaluation Exercises==<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations. Initially, the portal provides links to the current and past exercises.<br />
<br />
* [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises.<br />
* [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Issues in Semantic Evaluation==<br />
<br />
This section will identify the major issues in word-sense disambiguation and other semantic evaluation issues.<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8230SemEval Portal2010-10-11T20:52:35Z<p>Kenski: </p>
<hr />
<div>==Semantic Evaluation Exercises==<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations. Initially, the portal provides links to the current and past exercises.<br />
<br />
* [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises.<br />
* [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Issues in Semantic Evaluation==<br />
<br />
This section will identify the major issues in word-sense disambiguation and other semantic evaluation issues.<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8229SemEval Portal2010-10-11T20:40:11Z<p>Kenski: /* Issues in Semantic Evaluation */</p>
<hr />
<div>This page serves as a community portal for everything related to Semantic Evaluation. <br />
<br />
==Semantic Evaluation Exercises==<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations. Initially, the portal provides links to the current and past exercises.<br />
<br />
* [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises.<br />
* [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Issues in Semantic Evaluation==<br />
<br />
This section will identify the major issues in word-sense disambiguation and other semantic evaluation issues.<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8228SemEval Portal2010-10-11T20:39:03Z<p>Kenski: </p>
<hr />
<div>This page serves as a community portal for everything related to Semantic Evaluation. <br />
<br />
==Semantic Evaluation Exercises==<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations. Initially, the portal provides links to the current and past exercises.<br />
<br />
* [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises.<br />
* [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation.<br />
<br />
==Upcoming and Past Events==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
==Issues in Semantic Evaluation==<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8227SemEval Portal2010-10-11T20:35:01Z<p>Kenski: </p>
<hr />
<div>This page serves as a community portal for everything related to Semantic Evaluation. <br />
*[[Semantic Evaluation Exercises]]<br />
*[[Upcoming and Past Events]]<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations. Initially, the portal provides links to the current and past exercises.<br />
<br />
* [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises.<br />
* [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3<br />
<br />
== Semantic Evaluation Exercises ==<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation.<br />
<br />
== Upcoming and Past Events ==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
== Issues in Semantic Evaluation ==<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8226SemEval Portal2010-10-11T20:28:19Z<p>Kenski: </p>
<hr />
<div>This page serves as a community portal for everything related to Semantic Evaluation. <br />
[[ Semantic Evaluation Exercises ]]<br />
[[ Upcoming and Past Events ]]<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations. Initially, the portal provides links to the current and past exercises.<br />
<br />
* [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises.<br />
* [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3<br />
<br />
== Semantic Evaluation Exercises ==<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation.<br />
<br />
== Upcoming and Past Events ==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8225SemEval Portal2010-10-11T20:27:46Z<p>Kenski: </p>
<hr />
<div>This page serves as a community portal for everything related to Semantic Evaluation. <br />
[[Semantic Evaluation Exercises]]<br />
[[Upcoming and Past Events]]<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations. Initially, the portal provides links to the current and past exercises.<br />
<br />
* [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises.<br />
* [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3.<br />
<br />
== Semantic Evaluation Exercises ==<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation.<br />
<br />
== Upcoming and Past Events ==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8224SemEval Portal2010-10-11T20:24:36Z<p>Kenski: </p>
<hr />
<div>The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations. Initially, the portal provides links to the current and past exercises.<br />
<br />
* [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises.<br />
* [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3.<br />
<br />
== Semantic Evaluation Exercises ==<br />
<br />
The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation.<br />
<br />
== Upcoming and Past Events ==<br />
<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8223SemEval Portal2010-10-11T20:21:59Z<p>Kenski: </p>
<hr />
<div>The purpose of the [http://www.senseval.org SENSEVAL] and SemEval exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations. Initially, the portal provides links to the current and past exercises.<br />
<br />
* [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises.<br />
* [http://www.senseval.org/ SENSEVAL] is the home page for SENSEVAL 1-3.<br />
<br />
== Upcoming and Past Events ==<br />
{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| - [http://aclweb.org/anthology-new/S/S10/ proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| - [http://aclweb.org/anthology-new/S/S07/ proceedings] <br /> - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| - [http://aclweb.org/anthology-new/W/W04/#0800 proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| - main link provides links to results, data, system descriptions, task descriptions, and workshop program <br /> - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| - papers in [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities], subscribers or pay per view<br />
|-<br />
|}<br />
<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=Competitions_and_Challenges&diff=8198Competitions and Challenges2010-10-01T21:27:50Z<p>Kenski: </p>
<hr />
<div>The competitions and challenges below are categorized by year. In cases where an event begins in one year and ends in a later year,<br />
it is placed under the year in which it was or will be completed.<br />
<br />
<br />
== 2012 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://aclweb.org/aclwiki/index.php?title=SemEval_3 SemEval 3 (in planning, with tentative schedule)]<br />
<br />
== 2010 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://semeval2.fbk.eu/semeval2.php SemEval 2010]<br />
* [http://nlp.uned.es/weps/weps-3/ WePS 3: searching information about entities in the Web]<br />
<br />
== 2009 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://evalita.itc.it/ EVALITA 2009] - Evaluation of NLP Tools for Italian<br />
* [http://www.nist.gov/tac/2009/ TAC 2009] Text Analysis Conference<br />
* [http://nlp.uned.es/weps/weps-2/ WePS 2: Second Web People Search Evaluation Workshop]<br />
<br />
== 2008 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://www.itl.nist.gov./iad/894.01/tests/ace/2008/ ACE 2008] - Automatic Content Extraction (task: entity/relation detection and recognition)<br />
* [http://clef.isti.cnr.it/2008.html CLEF 2008] - Cross Language Evaluation Forum<br />
* [http://ltrc.iiit.ac.in/ner-ssea-08 NER-SSEAL 2008] - IJCNLP Workshop on NER for South and South East Asian Languages<br />
* [http://www.nist.gov/tac/tracks/2008/ TAC 2008] - Text Analysis Conference<br />
<br />
== 2007 ==<br />
<!-- Please keep this list in alphabetical order --><br />
<br />
* [http://www.nist.gov/speech/tests/ace/ace07/ ACE 2007] - Automatic Content Extraction 2007<br />
* [http://cleaneval.sigwac.org.uk/ CLEANEVAL 2007] - Cleaning pages for Web corpus creation<br />
* [http://www.clef-campaign.org/ CLEF 2007] - Cross Language Evaluation Forum<br />
* [http://duc.nist.gov/duc2007/call.html DUC-2007] - Document Understanding Conference (task: [[Text Summarization]])<br />
* [http://evalita.itc.it/ EVALITA 2007] - Evaluation of NLP Tools for Italian <br />
* [http://www.computationalmedicine.org/challenge/index.php Medical NLP Challenge] - The Computational Medicine Center's 2007 Medical Natural Language Processing Challenge<br />
* [http://www.cis.hut.fi/morphochallenge2007/ Morpho Challenge 2007] - unsupervised segmentation of words into morphemes<br />
* [http://www.pascal-network.org/Challenges/RTE3 RTE-3] - Third Recognising Textual Entailment Challenge ([[Recognizing Textual Entailment|RTE-3 Resources Pool]])<br />
* [http://nlp.cs.swarthmore.edu/semeval/index.shtml SemEval-2007] - 4th International Workshop on Semantic Evaluations<br />
* [http://www.cs.utk.edu/tmw07/ SIAM Text Mining Competition] - document classification<br />
* [http://challenge.spock.com Spock Challenge] - (task: name discrimination/disambiguation in Web text)<br />
* [http://trec.nist.gov/ TREC 2007] Text REtrieval Conference<br />
<br />
== 2006 == <br />
<!-- Please keep this list in alphabetical order --><br />
<br />
* [http://www.clef-campaign.org/ CLEF 2006] - Cross Language Evaluation Forum<br />
* [http://duc.nist.gov/duc2006/call.html DUC-2006] - Document Understanding Conference (task: [[Text Summarization]])<br />
* [http://www.pascal-network.org/Challenges/RTE2/ RTE-2] - Second Recognising Textual Entailment Challenge<br />
* [http://trec.nist.gov/ TREC 2006] Text REtrieval Conference<br />
<br />
== 2005 ==<br />
<!-- Please keep this list in alphabetical order --><br />
<br />
* [http://www.itl.nist.gov./iad/894.01/tests/ace/2005/ ACE 2005] - Automatic Content Extraction<br />
* [http://www.clef-campaign.org/ CLEF 2005] - Cross Language Evaluation Forum<br />
* [http://duc.nist.gov/duc2005/call.html DUC-2005] - Document Understanding Conference (task: [[Text Summarization]])<br />
* [http://www.cis.hut.fi/morphochallenge2005/ Morpho Challenge 2005] - unsupervised segmentation of words into morphemes<br />
* [http://www.pascal-network.org/Challenges/RTE/ RTE-1] - Recognising Textual Entailment Challenge<br />
* [http://trec.nist.gov/ TREC 2005] Text REtrieval Conference<br />
<br />
== 2004 ==<br />
<!-- Please keep this list in alphabetical order --><br />
<br />
* [http://www.itl.nist.gov./iad/894.01/tests/ace/2004/ ACE 2004] - Automatic Content Extraction<br />
* [http://www.clef-campaign.org/ CLEF 2004] - Cross Language Evaluation Forum<br />
* [http://duc.nist.gov/duc2004/call.html DUC-2004] - Document Understanding Conference (task: [[Text Summarization]])<br />
* [http://www.senseval.org/ Senseval 3] - evaluation exercises for [[Word Sense Disambiguation]]<br />
* [http://trec.nist.gov/ TREC 2004] Text REtrieval Conference<br />
<br />
== 2003 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://www.itl.nist.gov./iad/894.01/tests/ace/2003/ ACE 2003] - Automatic Content Extraction<br />
* [http://duc.nist.gov/duc2003/call.html DUC-2003] - Document Understanding Conference (task: [[Text Summarization]])<br />
<br />
== 2002 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://duc.nist.gov/duc2002/call.html DUC-2002] - Document Understanding Conference (task: [[Text Summarization]])</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=Competitions_and_Challenges&diff=8197Competitions and Challenges2010-10-01T21:26:58Z<p>Kenski: </p>
<hr />
<div>The competitions and challenges below are categorized by year. In cases where an event begins in one year and ends in a later year,<br />
it is placed under the year in which it was or will be completed.<br />
<br />
<br />
== 2012 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://aclweb.org/aclwiki/index.php?title=SemEval_3 SemEval 3 (tentative schedule)]<br />
<br />
== 2010 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://semeval2.fbk.eu/semeval2.php SemEval 2010]<br />
* [http://nlp.uned.es/weps/weps-3/ WePS 3: searching information about entities in the Web]<br />
<br />
== 2009 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://evalita.itc.it/ EVALITA 2009] - Evaluation of NLP Tools for Italian<br />
* [http://www.nist.gov/tac/2009/ TAC 2009] Text Analysis Conference<br />
* [http://nlp.uned.es/weps/weps-2/ WePS 2: Second Web People Search Evaluation Workshop]<br />
<br />
== 2008 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://www.itl.nist.gov./iad/894.01/tests/ace/2008/ ACE 2008] - Automatic Content Extraction (task: entity/relation detection and recognition)<br />
* [http://clef.isti.cnr.it/2008.html CLEF 2008] - Cross Language Evaluation Forum<br />
* [http://ltrc.iiit.ac.in/ner-ssea-08 NER-SSEAL 2008] - IJCNLP Workshop on NER for South and South East Asian Languages<br />
* [http://www.nist.gov/tac/tracks/2008/ TAC 2008] - Text Analysis Conference<br />
<br />
== 2007 ==<br />
<!-- Please keep this list in alphabetical order --><br />
<br />
* [http://www.nist.gov/speech/tests/ace/ace07/ ACE 2007] - Automatic Content Extraction 2007<br />
* [http://cleaneval.sigwac.org.uk/ CLEANEVAL 2007] - Cleaning pages for Web corpus creation<br />
* [http://www.clef-campaign.org/ CLEF 2007] - Cross Language Evaluation Forum<br />
* [http://duc.nist.gov/duc2007/call.html DUC-2007] - Document Understanding Conference (task: [[Text Summarization]])<br />
* [http://evalita.itc.it/ EVALITA 2007] - Evaluation of NLP Tools for Italian <br />
* [http://www.computationalmedicine.org/challenge/index.php Medical NLP Challenge] - The Computational Medicine Center's 2007 Medical Natural Language Processing Challenge<br />
* [http://www.cis.hut.fi/morphochallenge2007/ Morpho Challenge 2007] - unsupervised segmentation of words into morphemes<br />
* [http://www.pascal-network.org/Challenges/RTE3 RTE-3] - Third Recognising Textual Entailment Challenge ([[Recognizing Textual Entailment|RTE-3 Resources Pool]])<br />
* [http://nlp.cs.swarthmore.edu/semeval/index.shtml SemEval-2007] - 4th International Workshop on Semantic Evaluations<br />
* [http://www.cs.utk.edu/tmw07/ SIAM Text Mining Competition] - document classification<br />
* [http://challenge.spock.com Spock Challenge] - (task: name discrimination/disambiguation in Web text)<br />
* [http://trec.nist.gov/ TREC 2007] Text REtrieval Conference<br />
<br />
== 2006 == <br />
<!-- Please keep this list in alphabetical order --><br />
<br />
* [http://www.clef-campaign.org/ CLEF 2006] - Cross Language Evaluation Forum<br />
* [http://duc.nist.gov/duc2006/call.html DUC-2006] - Document Understanding Conference (task: [[Text Summarization]])<br />
* [http://www.pascal-network.org/Challenges/RTE2/ RTE-2] - Second Recognising Textual Entailment Challenge<br />
* [http://trec.nist.gov/ TREC 2006] Text REtrieval Conference<br />
<br />
== 2005 ==<br />
<!-- Please keep this list in alphabetical order --><br />
<br />
* [http://www.itl.nist.gov./iad/894.01/tests/ace/2005/ ACE 2005] - Automatic Content Extraction<br />
* [http://www.clef-campaign.org/ CLEF 2005] - Cross Language Evaluation Forum<br />
* [http://duc.nist.gov/duc2005/call.html DUC-2005] - Document Understanding Conference (task: [[Text Summarization]])<br />
* [http://www.cis.hut.fi/morphochallenge2005/ Morpho Challenge 2005] - unsupervised segmentation of words into morphemes<br />
* [http://www.pascal-network.org/Challenges/RTE/ RTE-1] - Recognising Textual Entailment Challenge<br />
* [http://trec.nist.gov/ TREC 2005] Text REtrieval Conference<br />
<br />
== 2004 ==<br />
<!-- Please keep this list in alphabetical order --><br />
<br />
* [http://www.itl.nist.gov./iad/894.01/tests/ace/2004/ ACE 2004] - Automatic Content Extraction<br />
* [http://www.clef-campaign.org/ CLEF 2004] - Cross Language Evaluation Forum<br />
* [http://duc.nist.gov/duc2004/call.html DUC-2004] - Document Understanding Conference (task: [[Text Summarization]])<br />
* [http://www.senseval.org/ Senseval 3] - evaluation exercises for [[Word Sense Disambiguation]]<br />
* [http://trec.nist.gov/ TREC 2004] Text REtrieval Conference<br />
<br />
== 2003 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://www.itl.nist.gov./iad/894.01/tests/ace/2003/ ACE 2003] - Automatic Content Extraction<br />
* [http://duc.nist.gov/duc2003/call.html DUC-2003] - Document Understanding Conference (task: [[Text Summarization]])<br />
<br />
== 2002 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://duc.nist.gov/duc2002/call.html DUC-2002] - Document Understanding Conference (task: [[Text Summarization]])</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=Competitions_and_Challenges&diff=8196Competitions and Challenges2010-10-01T21:26:39Z<p>Kenski: </p>
<hr />
<div>The competitions and challenges below are categorized by year. In cases where an event begins in one year and ends in a later year,<br />
it is placed under the year in which it was or will be completed.<br />
<br />
<br />
== 2012 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://aclweb.org/aclwiki/index.php?title=SemEval_3 SemEval 3 (tentative schedule)]<br />
* [http://nlp.uned.es/weps/weps-3/ WePS 3: searching information about entities in the Web]<br />
<br />
== 2010 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://semeval2.fbk.eu/semeval2.php SemEval 2010]<br />
* [http://nlp.uned.es/weps/weps-3/ WePS 3: searching information about entities in the Web]<br />
<br />
== 2009 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://evalita.itc.it/ EVALITA 2009] - Evaluation of NLP Tools for Italian<br />
* [http://www.nist.gov/tac/2009/ TAC 2009] Text Analysis Conference<br />
* [http://nlp.uned.es/weps/weps-2/ WePS 2: Second Web People Search Evaluation Workshop]<br />
<br />
== 2008 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://www.itl.nist.gov./iad/894.01/tests/ace/2008/ ACE 2008] - Automatic Content Extraction (task: entity/relation detection and recognition)<br />
* [http://clef.isti.cnr.it/2008.html CLEF 2008] - Cross Language Evaluation Forum<br />
* [http://ltrc.iiit.ac.in/ner-ssea-08 NER-SSEAL 2008] - IJCNLP Workshop on NER for South and South East Asian Languages<br />
* [http://www.nist.gov/tac/tracks/2008/ TAC 2008] - Text Analysis Conference<br />
<br />
== 2007 ==<br />
<!-- Please keep this list in alphabetical order --><br />
<br />
* [http://www.nist.gov/speech/tests/ace/ace07/ ACE 2007] - Automatic Content Extraction 2007<br />
* [http://cleaneval.sigwac.org.uk/ CLEANEVAL 2007] - Cleaning pages for Web corpus creation<br />
* [http://www.clef-campaign.org/ CLEF 2007] - Cross Language Evaluation Forum<br />
* [http://duc.nist.gov/duc2007/call.html DUC-2007] - Document Understanding Conference (task: [[Text Summarization]])<br />
* [http://evalita.itc.it/ EVALITA 2007] - Evaluation of NLP Tools for Italian <br />
* [http://www.computationalmedicine.org/challenge/index.php Medical NLP Challenge] - The Computational Medicine Center's 2007 Medical Natural Language Processing Challenge<br />
* [http://www.cis.hut.fi/morphochallenge2007/ Morpho Challenge 2007] - unsupervised segmentation of words into morphemes<br />
* [http://www.pascal-network.org/Challenges/RTE3 RTE-3] - Third Recognising Textual Entailment Challenge ([[Recognizing Textual Entailment|RTE-3 Resources Pool]])<br />
* [http://nlp.cs.swarthmore.edu/semeval/index.shtml SemEval-2007] - 4th International Workshop on Semantic Evaluations<br />
* [http://www.cs.utk.edu/tmw07/ SIAM Text Mining Competition] - document classification<br />
* [http://challenge.spock.com Spock Challenge] - (task: name discrimination/disambiguation in Web text)<br />
* [http://trec.nist.gov/ TREC 2007] Text REtrieval Conference<br />
<br />
== 2006 == <br />
<!-- Please keep this list in alphabetical order --><br />
<br />
* [http://www.clef-campaign.org/ CLEF 2006] - Cross Language Evaluation Forum<br />
* [http://duc.nist.gov/duc2006/call.html DUC-2006] - Document Understanding Conference (task: [[Text Summarization]])<br />
* [http://www.pascal-network.org/Challenges/RTE2/ RTE-2] - Second Recognising Textual Entailment Challenge<br />
* [http://trec.nist.gov/ TREC 2006] Text REtrieval Conference<br />
<br />
== 2005 ==<br />
<!-- Please keep this list in alphabetical order --><br />
<br />
* [http://www.itl.nist.gov./iad/894.01/tests/ace/2005/ ACE 2005] - Automatic Content Extraction<br />
* [http://www.clef-campaign.org/ CLEF 2005] - Cross Language Evaluation Forum<br />
* [http://duc.nist.gov/duc2005/call.html DUC-2005] - Document Understanding Conference (task: [[Text Summarization]])<br />
* [http://www.cis.hut.fi/morphochallenge2005/ Morpho Challenge 2005] - unsupervised segmentation of words into morphemes<br />
* [http://www.pascal-network.org/Challenges/RTE/ RTE-1] - Recognising Textual Entailment Challenge<br />
* [http://trec.nist.gov/ TREC 2005] Text REtrieval Conference<br />
<br />
== 2004 ==<br />
<!-- Please keep this list in alphabetical order --><br />
<br />
* [http://www.itl.nist.gov./iad/894.01/tests/ace/2004/ ACE 2004] - Automatic Content Extraction<br />
* [http://www.clef-campaign.org/ CLEF 2004] - Cross Language Evaluation Forum<br />
* [http://duc.nist.gov/duc2004/call.html DUC-2004] - Document Understanding Conference (task: [[Text Summarization]])<br />
* [http://www.senseval.org/ Senseval 3] - evaluation exercises for [[Word Sense Disambiguation]]<br />
* [http://trec.nist.gov/ TREC 2004] Text REtrieval Conference<br />
<br />
== 2003 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://www.itl.nist.gov./iad/894.01/tests/ace/2003/ ACE 2003] - Automatic Content Extraction<br />
* [http://duc.nist.gov/duc2003/call.html DUC-2003] - Document Understanding Conference (task: [[Text Summarization]])<br />
<br />
== 2002 ==<br />
<!-- Please keep this list in alphabetical order --><br />
* [http://duc.nist.gov/duc2002/call.html DUC-2002] - Document Understanding Conference (task: [[Text Summarization]])</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8183SemEval Portal2010-09-25T20:22:50Z<p>Kenski: </p>
<hr />
<div>The purpose of the [http://www.senseval.org SemEval/Senseval] exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation. [http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises. This portal will be used to provide a comprehensive view of the issues involved in semantic evaluations. Initially, the portal provides links to the current and past exercises.<br />
<br />
::{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| [http://aclweb.org/anthology-new/S/S10/ Proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| [http://aclweb.org/anthology-new/S/S07/ Proceedings] - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/ SENSEVAL 1-3]<br />
| align="center" | 1998-2004<br />
| <br />
| Umbrella site<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| [http://aclweb.org/anthology-new/W/W04/#0800 Proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| Main link provides links to Results, Data, System Descriptions, Task Descriptions, and Workshop Program - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities: Subscribers or pay per view]<br />
|-<br />
|}<br />
<br />
<br />
[http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises.<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenskihttps://aclweb.org/aclwiki/index.php?title=SemEval_Portal&diff=8182SemEval Portal2010-09-25T20:16:55Z<p>Kenski: </p>
<hr />
<div>The purpose of the [http://www.senseval.org SemEval/Senseval] exercises is to evaluate semantic analysis systems. The first three evaluations, Senseval-1 through Senseval-3, were focused on word sense disambiguation, each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the 4th workshop, SemEval-2007, the nature of the tasks evolved to include semantic analysis tasks outside of word sense disambiguation.<br />
<br />
::{| border="1" cellpadding="7" cellspacing="0" <br />
|-<br />
! Event<br />
! Year<br />
! Location<br />
! Notes<br />
|-<br />
| [[SemEval 3]]<br />
| align="center" | to be determined<br />
| to be determined<br />
| - discussion at [http://groups.google.com/group/semeval3 SemEval 3 Group]<br />
|-<br />
| [http://semeval2.fbk.eu/semeval2.php SemEval 2]<br />
| align="center" | 2010<br />
| Uppsala, Sweden<br />
| [http://aclweb.org/anthology-new/S/S10/ Proceedings]<br />
|-<br />
| [http://nlp.cs.swarthmore.edu/semeval/index.php SemEval 1]<br />
| align="center" | 2007 <br />
| Prague, Czech Republic<br />
| [http://aclweb.org/anthology-new/S/S07/ Proceedings] - copy of website at [http://web.archive.org/web/20080727062358/http://nlp.cs.swarthmore.edu/semeval/index.php Internet Archive]<br />
|-<br />
| [http://www.senseval.org/ SENSEVAL 1-3]<br />
| align="center" | 1998-2004<br />
| <br />
| Umbrella site<br />
|-<br />
| [http://www.senseval.org/senseval3 SENSEVAL 3]<br />
| align="center" | 2004<br />
| Barcelona, Spain<br />
| [http://aclweb.org/anthology-new/W/W04/#0800 Proceedings]<br />
|-<br />
| [http://www.sle.sharp.co.uk/senseval2 SENSEVAL 2]<br />
| align="center" | 2001<br />
| Toulouse, France<br />
| Main link provides links to Results, Data, System Descriptions, Task Descriptions, and Workshop Program - copy of website at [http://web.archive.org/web/20050507011044/http://www.sle.sharp.co.uk/senseval2/ Internet Archive]<br />
|-<br />
| [http://www.itri.brighton.ac.uk/events/senseval/ARCHIVE/index.html SENSEVAL 1]<br />
| align="center" | 1998<br />
| East Sussex, UK<br />
| [http://www.springerlink.com/content/0010-4817/34/1-2/ Computers and the Humanities: Subscribers or pay per view]<br />
|-<br />
|}<br />
<br />
<br />
[http://www.clres.com/siglex.html SIGLEX] is the umbrella organization for the SemEval semantic evaluations and the SENSEVAL word-sense evaluation exercises.<br />
<br />
<br />
[[Category:SemEval Portal]]</div>Kenski