BlackboxNLP 2024: The 7th Workshop on Analysing and Interpreting Neural Networks for NLP

Event Notification Type: 
Call for Papers
Abbreviated Title: 
BlackboxNLP 2024
Location: 
EMNLP 2024
Friday, 15 November 2024
State: 
Florida
Country: 
USA
City: 
Miami
Contact: 
Najoung Kim
Yonatan Belinkov
Jaap Jumelet
Hosein Mohebbi
Aaron Mueller
Hanjie Chen
Submission Deadline: 
Thursday, 15 August 2024

BlackboxNLP 2024: Analyzing and interpreting neural networks for NLP -- EMNLP 2024
When: November 15 or 16, 2024
Where: EMNLP 2024, Miami
Website: https://blackboxnlp.github.io

Workshop description
-----------------------------
Many recent performance improvements in NLP have come at the cost of our understanding of how these systems work.
How do we assess what representations and computations models learn?
How do we formalize desirable properties of interpretable models, and measure the extent to which existing models achieve them?
How can we build models that better encode these properties?
What can new or existing tools tell us about these systems’ inductive biases?

The goal of this workshop is to bring together researchers focused on interpreting and explaining NLP models by taking inspiration from fields such as machine learning, psychology, linguistics, and neuroscience.
We hope the workshop will serve as an interdisciplinary venue that fosters collaboration across these fields.

Topics of interest include, but are not limited to:
* Applying analysis techniques from neuroscience to analyze high-dimensional vector representations in artificial neural networks;
* Analyzing the network’s response to strategically chosen input in order to infer the linguistic generalizations that the network has acquired;
* Examining network performance on simplified or formal languages;
* Mechanistic interpretability and reverse-engineering approaches to understanding particular properties of neural models;
* Proposing modifications to neural architectures that increase their interpretability;
* Testing whether interpretable information can be decoded from intermediate representations;
* Explaining specific model predictions made by neural networks;
* Generating and evaluating the quality of adversarial examples in NLP;
* Developing open-source tools for analyzing neural networks in NLP;
* Evaluating the analysis results: how do we know that the analysis is valid?

BlackboxNLP 2024 is the seventh BlackboxNLP workshop. The programme and proceedings of the previous editions can be found on the workshop website.

Submissions
-----------------
We call for two types of papers:
1) Archival papers. These are papers reporting on completed, original, and
unpublished research, with a maximum length of 8 pages + references. Papers
shorter than this maximum are also welcome. Accepted papers are expected to
be presented at the workshop and will be published in the workshop
proceedings. They should report on obtained results rather than planned
work. These papers will undergo double-blind peer review and should
therefore be anonymized.
2) Extended abstracts. These may report on work in progress or may be
cross-submissions of work that has already appeared in a non-NLP venue.
Extended abstracts have a maximum length of 2 pages + references. These
submissions are non-archival, so the work may also be submitted to another
venue. The selection process is not double-blind, and submissions of this
type therefore need not be anonymized.

Submissions should follow the official EMNLP 2024 style guidelines.
The submission site is: https://openreview.net/group?id=EMNLP/2024/Workshop/BlackBoxNLP

Contact
---------------------
Please contact the organizers at blackboxnlp [at] googlegroups.com for any questions.
You can also find more information on our website, https://blackboxnlp.github.io/.

Important dates
---------------------
August 15, 2024 – Submission deadline.
September 30, 2024 – Notification of acceptance.
October 4, 2024 – Camera-ready papers due.
November 15 or 16, 2024 – Workshop.
Note: All deadlines are 11:59 PM UTC-12:00 ("anywhere on Earth").