First Call for Participation - XLLM Workshop @ ACL 2025

Event Notification Type: 
Call for Papers
Abbreviated Title: 
[CFP] XLLM Workshop 2025
Location: 
Vienna, Austria
Thursday, 31 July 2025 to Friday, 1 August 2025
Country: 
Austria
City: 
Vienna
Contact: 
Hao Fei
Kewei Tu
Submission Deadline: 
Tuesday, 18 March 2025

The 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025) @ ACL 2025
31 July-1 August 2025 - Vienna, Austria

– First Call for Participation: paper submissions and shared tasks.
– Learn more at the workshop website: https://xllms.github.io/

Are you passionate about structure prediction and modeling with LLMs in NLP, from syntactic and semantic parsing to information extraction and structured sentiment analysis? Our mission is to answer two questions: Is NLP structure modeling still worth exploring in the LLM era? Do the structure modeling methods and tasks that predate LLMs still hold value? Join us for XLLM 2025, the 1st workshop on LLMs and structure modeling, where we delve into the methodologies and applications of structure-aware LLMs and NLP. We call for both paper submissions and participation in the shared tasks.

============================================

1. Workshop Description

Language structure modeling has long been a crucial subfield of natural language processing (NLP): understanding the underlying semantic or syntactic structure of language and text. Language structures range broadly from low-level morphological/syntactic types (e.g., dependency structures and phrasal constituent structures) to high-level discourse/semantic structures (e.g., semantic parsing, semantic role labeling, abstract meaning representation), and extend further to broader NLP applications and multilingual and multimodal scenarios, such as information extraction and structured sentiment analysis. For a long time, modeling, inferring, and learning linguistic structures constituted an indispensable component of many NLP systems and were the key focus of a large proportion of NLP research.

The methodologies and paradigms of language structure modeling have changed dramatically with each wave of the deep learning revolution that began around a decade ago. In the last two to three years, Large Language Models (LLMs) have emerged, demonstrating unprecedented language understanding and generalization capabilities across a wide range of tasks. This raises critical questions: Is NLP structure modeling still worth exploring in the LLM era? Do the methods and tasks that predate LLMs still hold value?

On the one hand, we ask whether previous NLP structure modeling tasks, such as those concerning morphological, syntactic, semantic, and discourse structures and high-level structure-aware applications, can achieve even stronger performance with the powerful capabilities of LLMs.

On the other hand, we are also considering whether it is still necessary to model the underlying structures of language, given that large-scale pretraining on the surface form alone can endow LLMs with extraordinarily powerful language capabilities. In particular, can language structure modeling be beneficial for improving or understanding LLMs?

Thus, this 1st Joint XLLM Workshop at ACL 2025 aims to encourage discussions and highlight methods for language structure modeling in the era of LLMs. Specifically, we will explore two main directions: LLM for Structure Modeling (LLM4X) and Structure Modeling for LLM (X4LLM).

============================================

2. Call for Papers

A. LLM for Structure Modeling (LLM4X):
* Low-level Syntactic Parsing and Methods
** Morphological Parsing
** Dependency Parsing/Constituency Parsing
** Low-resource/Cross-lingual Syntactic Parsing
** Head-driven Phrase Structure Grammar Parsing
** Unsupervised Grammar Induction
** Cross-modal Parsing/Vision-Language Grammar Induction
* High-level Semantic Parsing and Methods
** Semantic Dependency Parsing
** Frame Parsing
** Semantic Role Labeling
** Abstract Meaning Representation
** Uniform Meaning Representation
** Universal Decompositional Semantic Parsing
** Universal Conceptual Cognitive Annotation
** Rhetorical Structure Theory (RST) Parsing
** Conversation Discourse Parsing
** Low-resource/Cross-lingual Semantic Parsing
* Broader Structure-aware Applications and Methods
** Information Extraction (IE): NER, RE, EE
** Structured Sentiment Analysis (SSA), Aspect-based Sentiment Analysis (ABSA)
** Low-resource/Cross-lingual IE/SSA/ABSA
** Cross-modal IE/SSA/ABSA
** Text-to-SQL
** Table Parsing
** Document Parsing
** Universal Structure Parsing/Modeling
** Human-centered Parsing with LLM
** Robustness Analysis of LLM-based Parsing

B. Structure Modeling for LLM (X4LLM):
* Linguistic/Mathematical arguments for or against the utility of structures in LLM
* Empirical studies of the utility of structures in LLM
* Integration of structures into LLM architectures
* Incorporation of structures as additional input or output in LLM
* Incorporation of training signals from structures in LLM pre-training and post-training
* LLM prompting with linguistic rules and structural information
* Analyses and interpretation of LLM through the lens of structures

We welcome two types of papers: regular papers and non-archival extended abstracts (on work in progress or work that has already appeared in or been accepted by another venue). We will also present **Best Paper Awards**. In addition to papers submitted directly to the workshop, which will be reviewed by our Programme Committee, we accept papers reviewed through ACL Rolling Review and committed to the workshop. Please check the workshop website for details.

============================================

3. Shared Tasks

We have set up four shared tasks, listed below. Participants can visit the respective task pages to learn about the specific participation requirements. System submissions will be evaluated using automatic metrics, with a focus on the accuracy and relevance of the results. Submissions are made via Codabench. Teams that achieve top rankings in the shared tasks will receive **Cash Prizes**. Winning participants are required to write a technical paper that fully describes their techniques and experimental results. Further details can be found on the task website: https://xllms.github.io/

Task-I: Dialogue-Level Dependency Parsing (DiaDP)
DiaDP aims to build a unified word-wise dependency tree for dialogue contexts. The tree integrates both inner-EDU dependencies (within Elementary Discourse Units, EDUs) and inter-EDU dependencies (across EDUs) to represent the syntactic and discourse relationships between words in dialogues. Given a dialogue consisting of multiple utterances segmented into EDUs, where each utterance is treated as a sentence-like unit, DiaDP outputs a structured dependency tree that includes: 1) inner-EDU dependencies: syntactic relationships within individual EDUs; and 2) inter-EDU dependencies: discourse relationships connecting different EDUs, including cross-utterance links. The task offers both zero-shot and few-shot learning settings.
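
For concreteness, here is a minimal sketch of what such a dialogue-level dependency structure might look like; the official data format is specified on the task page, and the field names, indexing scheme, and relation labels below are hypothetical:

    # Hypothetical encoding of a two-utterance dialogue and its dependency tree.
    # Word ids index the concatenated dialogue; id 0 is a virtual root.
    dialogue = {
        "utterances": ["Did you fix it ?", "Yes , I did ."],
        # Inner-EDU arcs: syntactic (head_id, dependent_id, relation) triples.
        "inner_edu_arcs": [
            (3, 1, "aux"),     # "fix" <- "Did"
            (3, 2, "nsubj"),   # "fix" <- "you"
            (0, 3, "root"),    # virtual root -> "fix"
            (3, 4, "obj"),     # "fix" <- "it"
            # arcs for the second utterance omitted
        ],
        # Inter-EDU arcs: discourse relations, here a cross-utterance link
        # from the question's head word to the answer's head word.
        "inter_edu_arcs": [
            (3, 9, "question-answer"),   # "fix" -> "did"
        ],
    }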

Task-II: Speech Event Extraction (SpeechEE)
SpeechEE aims to detect event predicates and arguments directly from audio speech, enabling information acquisition from spoken content such as meetings, interviews, and press releases. SpeechEE is defined as follows: given a speech audio input consisting of a sequence of acoustic frames, the goal is to extract structured event records comprising four elements: 1) the event type, 2) the event trigger, 3) the event argument roles, and 4) the corresponding event arguments.
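
As an illustration, the structured output for a spoken sentence such as "Acme announced it will acquire Beta Corp next month" might look as follows; the actual event ontology, record format, and field names are defined on the task page, and everything below is hypothetical:

    # Hypothetical event record extracted from the audio of:
    # "Acme announced it will acquire Beta Corp next month."
    event_record = {
        "event_type": "Business.Acquisition",   # 1) event type
        "trigger": "acquire",                   # 2) event trigger
        "arguments": [                          # 3) roles + 4) arguments
            {"role": "Acquirer", "argument": "Acme"},
            {"role": "Target",   "argument": "Beta Corp"},
            {"role": "Time",     "argument": "next month"},
        ],
    }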

Task-III: LLM for Structural Reasoning (LLM-SR)
LLM-SR seeks to generate a controllable and interpretable reasoning process by leveraging structural reasoning. LLM-SR requires structurally parsing two distinct components, the major premises and the minor premises, then identifying fine-grained “alignments” between these two structures, and ultimately deriving a conclusion. The task can be regarded as a constrained Chain-of-Thought (CoT) reasoning process, in which reasoning proceeds step by step with reference to facts and relevant rules, thereby improving the transparency and reliability of the process.
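
A minimal sketch of one such structured reasoning step, assuming a simple syllogism; the structures, alignment format, and field names below are hypothetical, and the official format is given on the task page:

    # Hypothetical structured reasoning step for a simple syllogism.
    step = {
        "major_premise": "Anyone who resides in Vienna resides in Austria.",
        "major_structure": {"condition": "resides in Vienna",
                            "consequent": "resides in Austria"},
        "minor_premise": "Anna resides in Vienna.",
        "minor_structure": {"subject": "Anna",
                            "predicate": "resides in Vienna"},
        # Fine-grained alignment between the two parsed structures:
        # the minor premise's predicate matches the rule's condition.
        "alignments": [("minor.predicate", "major.condition")],
        "conclusion": "Anna resides in Austria.",
    }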

Task-IV: Document-level Information Extraction (DocIE)
DocIE focuses on extracting information from long documents rather than isolated sentences, necessitating the integration of information both within and across multiple sentences while capturing complex interactions. Given a document and a predefined schema, DocIE requires the extraction of each instance (which may be null) corresponding to the schema's elements. This process involves identifying: (1) types of entities, (2) coreference relationships among mentions, (3) types of relations, and (4) the head and tail entities of each identified relation.
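
For example, given a short biography and a toy schema, the expected output might look like the sketch below; the schema elements, field names, and labels are invented for illustration, and the official schema is released with the task data:

    # Hypothetical schema and extraction output for a document about
    # "Marie Curie ... She taught at the University of Paris ..."
    schema = {
        "entity_types": ["Person", "Organization"],
        "relation_types": ["works_for"],
    }
    prediction = {
        # (1) entity types and (2) coreference: mentions grouped per entity.
        "entities": [
            {"type": "Person", "mentions": ["Marie Curie", "She"]},
            {"type": "Organization", "mentions": ["the University of Paris"]},
        ],
        # (3) relation types and (4) head/tail entities; may be empty (null).
        "relations": [
            {"type": "works_for",
             "head": "Marie Curie",
             "tail": "the University of Paris"},
        ],
    }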

============================================

4. Important Dates
All deadlines are in Anywhere on Earth (AoE) time.

A. Workshop Timeline:
* Direct workshop paper submission deadline: 18 March 2025
* ARR pre-reviewed workshop paper commitment deadline: 25 March 2025
* Acceptance notification of all papers: 30 April 2025
* Camera-ready paper deadline: 16 May 2025
* Pre-recorded video due (hard deadline): 7 July 2025
* Workshop dates (TBD): 31 July-1 August 2025

B. Shared Task Timeline:
* Training data and participant instructions released for all shared tasks: 10 February 2025
* Evaluation deadline for all shared tasks: 30 March 2025
* Results notification for all shared tasks: 5 April 2025
* Shared-task paper submission deadline: 12 April 2025
* Acceptance notification of all papers: 30 April 2025

============================================

5. Contact

Please email xllm2025 [at] googlegroups.com if you have any questions related to the workshop and shared tasks.