[COLING Workshop] The 2nd Workshop on Scaling Up Multilingual & Multi-Cultural Evaluation
First Call for Papers for SUMEval 2025 at COLING 2025: Deadline 18th Oct 2024
Workshop Description
The 9th Conference on Machine Translation (WMT24), co-located with EMNLP 2024, will feature the Shared Task on the Evaluation of Automatic Metrics this year. We are looking for both reference-based and reference-free metrics to evaluate the quality of MT systems. We will use expert-based MQM annotations on English→German, English→Spanish, and Japanese→Chinese as the primary gold standard for evaluating metrics. Details are at http://www2.statmt.org/wmt24/metrics-task.html.
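In this kind of meta-evaluation, each submitted metric is judged by how well its segment-level scores correlate with the MQM-based human judgments. A minimal sketch of that idea, with made-up scores (the task itself uses its own official scoring scripts and protocols), might look like this:

```python
# Illustrative meta-evaluation sketch (not the official WMT24 scoring script):
# a metric is ranked by how well its segment scores track MQM judgments.
from scipy.stats import kendalltau, pearsonr

# Hypothetical per-segment scores for one language pair.
mqm_scores    = [-5.0, 0.0, -1.0, -10.0, -0.1]   # MQM penalties: closer to 0 is better
metric_scores = [0.62, 0.91, 0.85, 0.30, 0.88]   # candidate metric: higher is better

tau, _ = kendalltau(metric_scores, mqm_scores)
r, _   = pearsonr(metric_scores, mqm_scores)
print(f"Kendall tau: {tau:.3f}  Pearson r: {r:.3f}")
```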
The “test suites” sub-task will be included for the sixth time in the General MT Shared Task of the Conference on Machine Translation (WMT24).
*OVERVIEW*
Calling all NLP, Digital Humanities and media analysis enthusiasts! Participate in the "Framing the Israel War on Gaza" (FIGNEWS) shared task and play a pivotal role in shaping media narrative research. Engage in creating guidelines, annotating a diverse multilingual corpus, and pushing the boundaries of NLP!
Task Highlights:
1. Guidelines Creation: Craft comprehensive annotation guidelines and set a benchmark in NLP research.
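Since guideline quality is typically benchmarked by how consistently independent annotators apply the guidelines, one simple check is inter-annotator agreement. A minimal sketch (Cohen's kappa is one common choice; the labels and annotations below are made up, not the task's actual label set):

```python
# Hedged sketch: measure how consistently two annotators apply a draft
# guideline. Labels here are illustrative placeholders only.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["Biased", "Unbiased", "Biased", "Unclear", "Unbiased"]
annotator_b = ["Biased", "Unbiased", "Unbiased", "Unclear", "Unbiased"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.3f}")  # closer to 1.0 suggests tighter guidelines
```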
Background
We invite you to participate in and submit your work to the First Workshop on Data Contamination (CONDA), co-located with ACL 2024 in Bangkok, Thailand.
The 4th Workshop on Evaluation and Comparison for NLP systems (Eval4NLP), co-located with the 2023 Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (AACL 2023), invites the submission of long and short papers, of a theoretical or experimental nature, describing recent advances in system evaluation and comparison in NLP.
Dear colleagues,
You are invited to participate in the Eval4NLP 2023 shared task on **Prompting Large Language Models as Explainable Metrics**.
Please find more information below and on the shared task webpage: https://eval4nlp.github.io/2023/shared-task.html
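The core idea of the task, prompting an LLM to act as an evaluation metric that also explains its judgment, can be sketched as follows. This is only an illustration: `query_llm` is a placeholder for whatever model interface you use, and the prompt wording and score range are assumptions, not the task's official protocol.

```python
# A minimal sketch of a prompt-based, reference-free evaluation metric.
def build_prompt(source: str, hypothesis: str) -> str:
    return (
        "Score the following translation from 0 (worst) to 100 (best) "
        "and briefly explain the main errors.\n"
        f"Source: {source}\n"
        f"Translation: {hypothesis}\n"
        "Score:"
    )

def prompt_metric(source, hypothesis, query_llm):
    reply = query_llm(build_prompt(source, hypothesis))
    # Parse the leading number as the score; keep the full reply
    # as the free-text explanation.
    score = float(reply.split()[0])
    return score, reply

# Usage, with any callable mapping a prompt string to a completion:
# score, explanation = prompt_metric("Der Hund bellt.", "The dog barks.", query_llm)
```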
Important Dates