*** Final submission reminder: submissions due in one week ***
Workshop description
Digital technologies have brought myriad benefits for society, transforming how people connect, communicate and interact with each other. However, they have also enabled harmful and abusive behaviours, including interpersonal aggression, bullying and hate speech, to reach large audiences and have amplified their negative effects. Communities that are already marginalised and vulnerable are often disproportionately at risk of receiving such abuse, compounding social inequality and injustice. For instance, Amnesty International reports that women are 27 times more likely than men to be the target of online harassment.
As academics, civil society, policymakers and tech companies devote more resources and effort to tackling online abuse, there is a pressing need for scientific research that critically and rigorously investigates how it is defined, detected and countered. Technical disciplines such as machine learning (ML), natural language processing (NLP) and statistics have made substantial advances in this field. However, concerns have been raised about the societal biases that many automated detection systems reflect, propagate and sometimes amplify. For example, many systems have different error rates for content produced by different groups of people (such as higher error rates on content written in African-American Language (AAL)) or perform better at detecting certain types of abuse than others. These issues are magnified by the lack of explainability and transparency in most abusive content detection systems. These are not purely engineering challenges; they raise fundamental social questions of fairness and harm: any intervention that employs biased, inaccurate or brittle models to detect and moderate online abuse could end up exacerbating the social injustices it aims to counter.
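To make the group-dependent error rate concern concrete, the following is a minimal sketch (not part of this call) of the kind of audit the workshop theme points at: comparing a classifier's false positive and false negative rates across speaker groups. The dataset, field names ("text", "label", "group") and the dummy classifier are hypothetical placeholders, to be replaced with a real abuse detection model and an annotated corpus.

```python
# Minimal per-group error-rate audit (illustrative sketch only).
from collections import defaultdict

def per_group_error_rates(examples, predict):
    """Compute false positive / false negative rates per group.

    examples: iterable of dicts with keys "text", "label" (1 = abusive),
              and "group" (e.g. a dialect or demographic annotation).
    predict:  callable mapping a text string to a 0/1 prediction.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for ex in examples:
        pred = predict(ex["text"])
        stats = counts[ex["group"]]
        if ex["label"] == 1:
            stats["pos"] += 1
            stats["fn"] += int(pred == 0)   # abusive content missed
        else:
            stats["neg"] += 1
            stats["fp"] += int(pred == 1)   # benign content flagged
    return {
        group: {
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else None,
            "false_negative_rate": s["fn"] / s["pos"] if s["pos"] else None,
        }
        for group, s in counts.items()
    }

if __name__ == "__main__":
    toy_data = [
        {"text": "example 1", "label": 0, "group": "AAL"},
        {"text": "example 2", "label": 1, "group": "AAL"},
        {"text": "example 3", "label": 0, "group": "other"},
    ]
    dummy_model = lambda text: 0  # stand-in for a real abuse detector
    print(per_group_error_rates(toy_data, dummy_model))
```

Large gaps between groups in either rate are one signal of the kind of unfairness the workshop theme asks submissions to interrogate.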
For the fifth edition of the Workshop on Online Abuse and Harms (WOAH 5) we advance research in online abuse through our theme: Social Bias and Unfairness in Online Abuse Detection. We continue to emphasise the need for inter-, cross- and anti-disciplinary work on online abuse and harms, and invite paper submissions from a range of fields, including but not limited to NLP, machine learning, computational social science, law, politics, psychology, network analysis, sociology and cultural studies. Continuing the tradition started at WOAH 4, we invite civil society, in particular individuals and organisations working with women and marginalised communities who are often disproportionately affected by online abuse, to submit reports, case studies, findings and data, and to record their lived experiences. We hope that through these engagements WOAH can directly address the issues faced by those on the front lines of tackling online abuse.
Shared task
We are pleased to announce that this year we will be hosting a shared task on the multi-modal Hateful Memes dataset: predicting fine-grained attributes (e.g. attack type or identity) for the dataset, to encourage further research into multi-modal hate speech. For more information see: https://www.workshopononlineabuse.com/cfp/shared-task-on-hateful-memes
Joint session with the MWE Workshop
We are hosting a 1-hour joint session with the Workshop on Multiword Expressions (MWE) to explore how multiword expressions (e.g. “sweep under the rug”) may factor into the detection of online abuse. We believe that considering multiword expressions in abusive language can benefit both the WOAH and MWE communities, opening a new avenue of research for the WOAH community and providing an additional testbed for MWE processing technology. The main goal of the joint session is to pave the way towards the creation of a dataset for a shared task involving both communities. Submissions describing research on MWEs and abusive language, especially those introducing new datasets, are also welcome.
Timeline
Submission deadline: April 26, 2021
Notification date: June 4, 2021
Camera-ready date: June 30, 2021
Contributions
We invite academic/research papers on any of the following topics. We also invite civil society reports.
Related to developing computational models and systems:
- NLP and Computer Vision models and methods for detecting abusive language online, including but not limited to hate speech, gender-based violence and cyberbullying
- Application of NLP and Computer Vision tools to analyze social media content and other large data sets
- NLP and Computer Vision models for cross-lingual abusive language detection
- Computational models for multi-modal abuse detection
- Development of corpora and annotation guidelines
- Critical algorithm studies with a focus on content moderation technology
- Human-Computer Interaction for abusive language detection systems
- Best practices for using NLP and Computer Vision techniques in watchdog settings
- Submissions addressing interpretability and social biases in content moderation technologies
Related to legal, social, and policy considerations of abusive language online:
- The social and personal consequences of being the target of abusive language and targeting others with abusive language
- Assessment of current (computational and non-computational) methods of addressing abusive language
- Legal ramifications of measures taken against abusive language use
- Social implications of monitoring and moderating unacceptable content
- Considerations of implemented and proposed policies for dealing with abusive language online and the technological means of dealing with it
Submission Information
Submission link: https://www.softconf.com/acl2021/w02_woah2021
We will be following the ACL-IJCNLP 2021 Submission Guidelines. Authors are invited to submit long papers of up to 8 pages of content or short papers of up to 4 pages of content, with unlimited pages for references. We also invite non-archival abstract submissions of up to 2 pages, with up to 2 additional pages for references, as well as civil society reports. Accepted papers will be given an additional page of content to address reviewer comments. We also welcome papers that describe systems.
Previously published papers cannot be accepted, but papers that are currently under review at other venues are welcome. Submissions will be reviewed by the program committee. As reviewing will be blind, please ensure that papers are anonymised. Self-references that reveal the author's identity, e.g., "We previously showed (Smith, 1991) ...", should be avoided; instead, use citations such as "Smith previously showed (Smith, 1991) ...".
We have also included a conflict of interest section in the submission form. Please mark all potential reviewers who have been authors on the paper, are from the same research group or institution, or who have seen versions of the paper or discussed it with you.
Finally, we request that all papers adhere to our submission policies, which can be found on our website.