Second Call for Papers: The First Workshop on Dynamic Adversarial Data Collection (DADC)

Event Notification Type: 
Call for Papers
Location: 
co-located with NAACL 2022
Thursday-Friday, 14-15 July 2022
State: 
Washington
Country: 
United States
City: 
Seattle
Contact: 
Max Bartolo
Hannah Kirk
Pedro Rodriguez
Katerina Margatina
Submission Deadline: 
Friday, 8 April 2022

We are pleased to announce the call for papers for the First Workshop on Dynamic Adversarial Data Collection (DADC), taking place on 14-15 July 2022, co-located with NAACL 2022 in Seattle, Washington. Full details on the workshop, including topics of interest, important deadlines, and instructions for authors, are available here:

https://dadcworkshop.github.io/

Dynamic Adversarial Data Collection (DADC) has been gaining traction in the NLP research community as a promising approach to improving data collection practices, model evaluation, and task performance. DADC enables the dynamic collection of human-written data with models in the loop. Human annotators can be tasked with finding adversarial examples that fool current state-of-the-art (SOTA) models, or they can cooperate with assistive models in the loop to find interesting examples.

Recently, various efforts have shown that the DADC process yields richer training datasets for tasks such as Question Answering, Natural Language Inference, Sentiment Analysis, and Hate Speech Detection. Research on DADC-based approaches has also found that it provides a more realistic evaluation setting, that the benefits of DADC can be scaled using synthetic data generation, and that generative assistants can improve both collection efficiency and effectiveness.

Building on this interest in the community, we invite researchers to share their latest work in designing and understanding dynamic adversarial data collection methods. We welcome work on topics including (but not limited to) the following:

* Dynamic Benchmarking: Improving or understanding dynamic benchmarking and evaluation, along with investigation into the roles of humans and models for better evaluation, more reliable metrics, etc.
* Data Collection: Dynamic and/or adversarial data collection for the purpose of gathering model training data, including the design of creative interfaces, system design, improving annotator engagement, reducing annotator artifacts or bias, investigating the benefits of approaches such as expert annotation or crowdsourcing, and improving data quality and efficiency.
* Active Learning for Dynamic Adversarial Data Collection: Investigating ways to incorporate existing or new active learning algorithms into the DADC pipeline.
* Model Design, Interpretability and Bias: Developing models that are more robust in dynamic and/or adversarial settings, as well as understanding the roles of human and model competition or collaboration for improved model robustness, interpretability, bias mitigation and algorithmic fairness.
* Applications to New Tasks: Extending the ideas behind dynamic adversarial data collection to new tasks, such as generative, multilingual, and multimodal tasks.
* Analysis and Limitations of Current Approaches: Critiques or discussions of the limitations of traditional and/or dynamic adversarial data collection approaches and model training and evaluation methodologies.

We will have an archival track as well as a non-archival track. Archival-track submissions either go through a standard double-blind review process or can be submitted with ARR reviews. The non-archival track seeks recently published work; such submissions do not need to be anonymized and will not go through the review process. Non-archival submissions should clearly indicate the original venue and will be accepted if the committee believes the work will benefit from exposure to the workshop audience. Non-archival papers will not be included in the workshop proceedings. For both tracks, we accept short papers (4 pages of content plus references) and long papers (8 pages of content plus references).

We will also be running a Shared Task competition focused on better annotation, better training data, and better models. For more details and participation instructions, see https://dadcworkshop.github.io/shared-task.html.

*Important Dates*
April 8th, 2022: Submission deadline (for papers requiring peer review)
May 1st, 2022: Submission deadline (with ARR reviews)
May 1st, 2022: Submission deadline (Non-archival)
May 6th, 2022: Notification of acceptance
May 20th, 2022: Camera-ready papers due
July 14th-15th, 2022: Workshop dates

All submission deadlines are 11:59 PM GMT-12 (anywhere in the world) unless otherwise noted.

Please contact the workshop organizers at dadc-workshop [at] googlegroups.com with any questions.