SemEval-2025 Task 1 - Call for participation - Advancing Multimodal Idiomaticity Representation

Event Notification Type: 
Call for Participation
Abbreviated Title: 
SemEval-2025 AdMIRe
Contact: 
Tom Pickard
Wei He
Maggie Mi
Dylan Phelps
Carolina Scarton
Marco Idiart
Aline Villavicencio
Submission Deadline: 
Friday, 31 January 2025

————————————————————————————
AdMIRe - Advancing Multimodal Idiomaticity Representation
————————————————————————————
https://semeval2025-task1.github.io/

We are delighted to invite you to participate in the SemEval-2025 Shared Task on Idiomaticity Detection in Images, a novel challenge focused on advancing multimodal understanding through the lens of idiomatic expressions in visual contexts.

Comparing the performance of language models (including LLMs) to humans shows that models lag behind humans in the comprehension of idioms (Tayyar Madabushi et al., 2021; Chakrabarty et al., 2022a; Phelps et al., 2024).
As idioms are believed to be conceptual products, and humans understand their meaning through interactions with the real world involving multiple senses (Lakoff and Johnson, 1980; Benczes, 200), we build on the previous SemEval-2022 Task 2 (Tayyar Madabushi et al., 2022) and seek to explore the comprehension abilities of multimodal models. In particular, we focus on models that incorporate visual and textual information, to test how well they can capture idiomatic meaning and whether multiple modalities can improve these representations.

Good representations of idioms are crucial for applications such as sentiment analysis, machine translation and natural language understanding. Exploring ways to improve models’ ability to interpret idiomatic expressions can enhance the performance of these applications. For example, due to poor automatic translation of an idiom, the winner of Eurovision 2018 appeared to be called a ‘real cow’ instead of a ‘real darling’! Our hope is that this task will help the NLP community to better understand the limitations of contemporary language models and to make advances in idiomaticity representation.

——————
Task Details
——————
We present two subtasks which use visual and visual-temporal representations of meaning, across two languages: English and Portuguese. The two subtasks are:

Subtask A - Static Images
Participants will be presented with a set of 5 images and a context sentence in which a particular potentially idiomatic nominal compound (NC) appears.
The goal is to rank the images according to how well they represent the sense in which the NC is used in the given context sentence.

Subtask B - Image Sequences (or Next Image Prediction)
Participants will be given a target expression and an image sequence from which the last of 3 images has been removed, and the objective will be to select, from a set of candidate images, the one which best completes the sequence.
The NC sense being depicted (idiomatic or literal) will not be given, and participants must also output this label.

————
Dataset
————
We provide training, development and test data for each of the subtasks.
The training and development splits are already available; evaluation will be performed on a held-out test set.
See website for more detailed information: https://semeval2025-task1.github.io

—————
Timeline
—————
Sample data available: 16 July 2024
Subtask A (English) training data: now available
Subtask B (English) training data: now available
Portuguese training data: coming soon
Evaluation start: 10 January 2025
Evaluation end: 31 January 2025
Paper submission due: 28 February 2025
Notification to authors: 31 March 2025
Camera-ready due: 21 April 2025
SemEval workshop: Summer 2025 (co-located with a major NLP conference)
All deadlines are 23:59 UTC-12 ("anywhere on Earth").

——————
Participation
——————
To discuss the task and ensure you receive information about future developments, join the mailing list: admire-semeval-2025 [at] googlegroups.com

More information can also be found on our website: https://semeval2025-task1.github.io

———————
Organisers
———————
Tom Pickard, University of Sheffield, UK
Wei He, University of Sheffield, UK
Maggie Mi, University of Sheffield, UK
Dylan Phelps, University of Sheffield, UK
Carolina Scarton, University of Sheffield, UK
Marco Idiart, Federal University of Rio Grande do Sul, Brazil
Aline Villavicencio, University of Exeter; University of Sheffield, UK