The First Workshop on Large Language Model Memorization (L2M2)

Event Notification Type: 
Call for Papers
Abbreviated Title: 
L2M2 workshop
Location: 
ACL 2025
Thursday, 31 July 2025 to Friday, 1 August 2025
Country: 
Austria
City: 
Vienna
Contact: 
Robin Jia
Eric Wallace
Yangsibo Huang
Tiago Pimentel
Pratyush Maini
Verna Dankers
Johnny Wei
Pietro Lesci
Submission Deadline: 
Tuesday, 25 March 2025

Call for Papers
Large language models (LLMs) are known to memorize their training data. In recent years, this phenomenon has inspired multiple distinct research directions. Some researchers focus on analyzing and understanding LLM memorization, attempting to localize memorized knowledge within a model or to understand which examples are most likely to be memorized. Other researchers aim to edit or remove information that an LLM has memorized. Still others study the downstream implications of LLM memorization, including legal concerns associated with memorizing copyrighted articles, privacy risks associated with LLMs leaking private information, and concerns that LLMs cheat on benchmarks by memorizing test data.

The First Workshop on Large Language Model Memorization (L2M2), co-located with ACL 2025 in Vienna, seeks to provide a central venue for researchers studying LLM memorization from these different angles. We invite paper submissions on all topics related to LLM memorization, including but not limited to:

  • Behavioral analyses that seek to understand what training data is memorized by models.
  • Methods for measuring the extent to which models memorize training data.
  • Interpretability work analyzing how models memorize training data.
  • The relationship between training data memorization and membership inference.
  • Analyses of how memorization of training data is related to generalization and model capabilities.
  • Methods or benchmarks for preventing models from outputting memorized data, such as machine unlearning, model editing, or output filtering techniques.
  • Model editing techniques for modifying knowledge that models have memorized.
  • Legal implications and risks of LLM memorization.
  • Privacy and security risks of LLM memorization.
  • Implications of memorization for benchmarking, such as data contamination concerns.

Any model that processes or generates text is in scope for this workshop, including text-only language models, vision-language models, machine translation models, etc. Memorization in the context of L2M2 could refer to verbatim memorization of training data, approximate memorization of copyrighted content, memorization of factual knowledge, memorization of correct answers to questions, or other definitions.

If you have any questions about whether your paper is suitable for our workshop, feel free to contact us at l2m2-workshop@googlegroups.com.

Submission Guidelines
We seek submissions of at least 4 and at most 8 pages, not including references. All submissions will be reviewed in a single track, regardless of length. Please format your papers using the standard ACL format. We will accept direct submissions via OpenReview, as well as submissions made through ACL Rolling Review.

We accept two types of submissions:

  1. Archival: Original work that will be included in the workshop proceedings. Submissions must be anonymized and will go through a double-blind review process.
  2. Non-archival: Such submissions will not be included in the proceedings. The workshop will accept two categories of non-archival work:
    • Work published elsewhere: Authors should indicate the original venue; submissions will be accepted if the organizers think the work would benefit from exposure to the workshop's audience. Submissions of this type do not need to be anonymized.
    • Work in progress: These submissions may report on work in progress that has not yet been published at another venue. Acceptance will be based on a double-blind review, so submissions of this type must be anonymized.

Dual submissions
We allow submissions that are also under review at other venues. Please note, however, that many conferences do not permit dual submission, so make sure you do not violate their policies.

Anonymity period
We do not have an anonymity period. Preprints are allowed, both before and after the submission deadline.

Important Dates

  • Latest submission date for ARR submissions: February 15, 2025
  • Direct submission deadline: April 15, 2025
  • Pre-reviewed (ARR) commitment deadline: May 20, 2025
  • Notification of acceptance: June 10, 2025
  • Camera-ready paper deadline: June 17, 2025
  • Workshop dates: July 31 – August 1, 2025

Note: All deadlines are 11:59 p.m. UTC-12:00 ("anywhere on Earth").