2020Q3 Reports: Tutorial Chairs
Tutorial Chairs:
Agata Savary, University of Tours, France
Yue Zhang, Westlake University, Hangzhou, China
Preparatory Work, Call, Reviewing and Decision-Making
The call, submission, reviewing and selection of tutorials were coordinated jointly for 4 conferences: ACL, AACL-IJCNLP, COLING and EMNLP.
Before drafting the call, we collected lists of tutorials offered within the past 4 years. We analysed previous calls for tutorials and reports from tutorial chairs (from 2016, 2017, 2018 and 2019). We consulted previous tutorial chairs with a questionnaire including questions about: the number of submissions, encouraging submissions on specific topics or from specific lecturers, the review procedure, the evaluation criteria, the post-tutorial availability of the slides/codes, and lessons learned from tutorial coordination. We also discussed the publication of slides and video recordings from future tutorials with the persons in charge of the ACL Anthology. As a result of these steps, we created two new sections for the ACL Conference Handbook (future chairs might consider updating these documents yearly):
- the list of past tutorials at ACL, COLING, EACL, EMNLP, and NAACL in 2016-2019
- a tutorial chair handbook
The final call differed from previous calls in several respects: (i) the expectations about tutorial proposals were made clearer; (ii) following the central ACL decision, the teachers' payment policy was replaced by a fee-waiving policy; (iii) the required submission details included two new items: diversity considerations and an agreement to the open-access publication of slides, code, data and video recordings; (iv) the evaluation criteria (see below) were announced.
We recruited a review committee of 19 members, including the 8 tutorial chairs and 11 external members selected for their broad understanding of the NLP domain and solid experience in reviewing and/or tutorial teaching:
Review Committee
- Timothy Baldwin (University of Melbourne, Australia) - AACL-IJCNLP 2020 tutorial chair
- Daniel Beck (University of Melbourne, Australia) - COLING 2020 tutorial chair
- Emily M. Bender (University of Washington, WA, USA)
- Erik Cambria (Nanyang Technological University, Singapore)
- Gaël Dias (University of Caen Normandie, France)
- Stefan Evert (Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany)
- Yang Liu (Tsinghua University, Beijing, China)
- Agata Savary (University of Tours, France) - ACL 2020 tutorial chair
- João Sedoc (Johns Hopkins University, Baltimore, MD, USA)
- Lucia Specia (Sheffield University, UK) - COLING 2020 tutorial chair
- Xu Sun (Peking University, China)
- Yulia Tsvetkov (Carnegie Mellon University, Pittsburgh, PA, USA)
- Benjamin Van Durme (Johns Hopkins University, Baltimore, MD, USA) - EMNLP 2020 tutorial chair
- Aline Villavicencio (University of Sheffield, UK and Federal University of Rio Grande do Sul, Brazil) - EMNLP 2020 tutorial chair
- Taro Watanabe (Google, Inc., Tokyo, Japan)
- Aaron Steven White (University of Rochester, NY, USA)
- Fei Xia (University of Washington, WA, USA) - AACL-IJCNLP 2020 tutorial chair
- Yue Zhang (Westlake University, Hangzhou, China) - ACL 2020 tutorial chair
- Meishan Zhang (Tianjin University, China)
In total, we received 43 submissions for the 4 conferences. Each reviewer was assigned 6-7 proposals and each proposal received 3 reviews. The selection criteria included: clarity and preparedness, novelty or timely character of the topic, lecturers' experience, likely audience interest, open access of the teaching material, diversity aspects (multilingualism, gender, age and country of the lecturers), and compatibility with the preferred venues. We accepted 31 proposals.
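As a quick sanity check of the reviewing load (our own sketch, not the script actually used for the assignment), a plain round-robin distribution of 43 proposals with 3 reviews each over 19 reviewers indeed yields 6-7 reviews per reviewer:

```python
# Minimal sketch of the reviewing-load arithmetic; a real assignment would also need
# to respect reviewer expertise and conflicts of interest, which this sketch ignores.
from itertools import cycle

N_PROPOSALS = 43          # submissions received for the 4 conferences
REVIEWS_PER_PROPOSAL = 3  # reviews per proposal
N_REVIEWERS = 19          # 8 tutorial chairs + 11 external members

load = {r: 0 for r in range(N_REVIEWERS)}
reviewers = cycle(range(N_REVIEWERS))
# Each proposal's three review slots go to the next three reviewers in the cycle.
for _ in range(N_PROPOSALS * REVIEWS_PER_PROPOSAL):
    load[next(reviewers)] += 1

print(sorted(load.values()))  # -> [6, 6, 6, 6, 7, ..., 7]: four reviewers with 6 reviews, fifteen with 7
```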
The decision making was handled via an online meeting of the 8 tutorial chairs. In particular, the tutorials for each conference were selected by having the chairs express their interest in proposals on a round-robin basis. Some slight adjustments were also made after the meeting to better fit the authors' preferences. In total, 8, 8, 8 and 7 proposals were selected for ACL, AACL-IJCNLP, COLING and EMNLP, respectively. Upon the announcement of the results, 2 of the proposals accepted for AACL-IJCNLP were withdrawn.
The submission, review, selection and collection of final descriptions for all tutorials were handled via a dedicated SoftConf space, shared by the 4 coordinating conferences. After the selection of proposals, a separate track was created on SoftConf for each conference. The final submission page (one per conference) was set up so as to collect all the necessary data, notably: the tutorial slides, URLs for course material (if any), printable material (if any) and the agreement for open-access publication.
Tutorials Selected for ACL 2020
The final selection for ACL 2020 consists of the following 8 tutorials of 3 hours each (each of them had ACL as the preferred or the second preferred venue):
T1: Interpretability and Analysis in Neural NLP (cutting-edge)
Yonatan Belinkov, Sebastian Gehrmann and Ellie Pavlick
While deep learning has transformed the NLP field and impacted the larger computational linguistics community, the rise of neural networks is stained by their opaque nature: It is challenging to interpret the inner workings of neural network models, and explicate their behavior. Therefore, in the last few years, an increasingly large body of work has been devoted to the analysis and interpretation of neural network models in NLP.
This body of work is so far lacking a common framework and methodology. Moreover, approaching the analysis of modern neural networks can be difficult for newcomers to the field. This tutorial aims to fill this gap and introduce the nascent field of interpretability and analysis of neural networks in NLP.
The tutorial covers the main lines of analysis work, such as probing classifiers, behavior studies and test suites, psycholinguistic methods, visualizations, adversarial examples, and other methods. We highlight not only the most commonly applied analysis methods, but also the specific limitations and shortcomings of current approaches, in order to inform participants where to focus future efforts.
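Among the methods listed above, probing classifiers are perhaps the easiest to illustrate. The sketch below is our own rough illustration, not material from the tutorial: a lightweight classifier is trained on frozen representations to predict a linguistic property; the representations and labels here are random placeholders, so the probe performs at chance.

```python
# Rough illustration of a probing classifier (placeholder data, not from the tutorial):
# if a simple classifier can predict a linguistic property from frozen model
# representations, that property is arguably encoded in the representations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
reprs = rng.normal(size=(1000, 768))    # stand-in for frozen encoder outputs (e.g. from BERT)
labels = rng.integers(0, 2, size=1000)  # stand-in for a binary property (e.g. past vs. present tense)

X_train, X_test, y_train, y_test = train_test_split(reprs, labels, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probing accuracy:", probe.score(X_test, y_test))  # ~0.5 here, since the data is random
```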
T2: Multi-modal Information Extraction from Text, Semi-structured, and Tabular Data on the Web (cutting-edge)
Xin Luna Dong, Hannaneh Hajishirzi, Colin Lockard and Prashant Shiralkar
The World Wide Web contains vast quantities of textual information in several forms: unstructured text, template-based semi-structured webpages (which present data in key-value pairs and lists), and tables. Methods for extracting information from these sources and converting it to a structured form have been a target of research from the natural language processing (NLP), data mining, and database communities. While these researchers have largely separated extraction from web data into different problems based on the modality of the data, they have faced similar problems such as learning with limited labeled data, defining (or avoiding defining) ontologies, making use of prior knowledge, and scaling solutions to deal with the size of the Web.
In this tutorial we take a holistic view toward information extraction, exploring the commonalities in the challenges and solutions developed to address these different forms of text. We will explore the approaches targeted at unstructured text that largely rely on learning syntactic or semantic textual patterns, approaches targeted at semi-structured documents that learn to identify structural patterns in the template, and approaches targeting web tables which rely heavily on entity linking and type information.
While these different data modalities have largely been considered separately in the past, recent research has started taking a more inclusive approach toward textual extraction, in which the multiple signals offered by textual, layout, and visual clues are combined into a single extraction model made possible by new deep learning approaches. At the same time, trends within purely textual extraction have shifted toward full-document understanding rather than considering sentences as independent units. With this in mind, it is worth considering the information extraction problem as a whole to motivate solutions that harness textual semantics along with visual and semi-structured layout information. We will discuss these approaches and suggest avenues for future work.
T3: Reviewing Natural Language Processing Research (introductory)
Kevin Cohen, Karën Fort, Margot Mieskes and Aurélie Névéol
As the demand for reviewing grows, so must the pool of reviewers. As the survey presented by Graham Neubig at the 2019 ACL showed, a considerable number of reviewers are junior researchers, who might lack the experience and expertise necessary for high-quality reviews. Some of them might lack the environment or the opportunities to learn the necessary skills. A tutorial on reviewing for the NLP community might increase reviewers' confidence, as well as the quality of the reviews. This introductory tutorial will cover the goals, processes, and evaluation of reviewing research papers in natural language processing.
T4: Stylized Text Generation: Approaches and Applications (cutting-edge)
Lili Mou and Olga Vechtomova
Text generation has played an important role in various applications of natural language processing (NLP), and in recent studies, researchers are paying increasing attention to modeling and manipulating the style of the generated text, which we call stylized text generation. In this tutorial, we will provide a comprehensive literature review in this direction. We start from the definition of style and different settings of stylized text generation, illustrated with various applications. Then, we present different settings of stylized generation, such as parallel supervised, style label-supervised, and unsupervised. In each setting, we delve deep into machine learning methods, including embedding learning techniques to represent style, adversarial learning, and reinforcement learning with cycle consistency to match content but to distinguish different styles. We also introduce current approaches to evaluating stylized text generation systems. We conclude our tutorial by presenting the challenges of stylized text generation and discussing future directions, such as small-data training, non-categorical style modeling, and a generalized scope of style transfer (e.g., controlling the syntax as a style).
T5: Achieving Common Ground in Multi-modal Dialogue (cutting-edge)
Malihe Alikhani and Matthew Stone
All communication aims at achieving common ground (grounding): interlocutors can work together effectively only with mutual beliefs about what the state of the world is, about what their goals are, and about how they plan to make their goals a reality. Computational dialogue research offers some classic results on grounding, which unfortunately offer scant guidance to the design of grounding modules and behaviors in cutting-edge systems. In this tutorial, we focus on three main topic areas: 1) grounding in human-human communication; 2) grounding in dialogue systems; and 3) grounding in multi-modal interactive systems, including image-oriented conversations and human-robot interactions. We highlight a number of achievements of recent computational research in coordinating complex content, show how these results lead to rich and challenging opportunities for doing grounding in more flexible and powerful ways, and canvass relevant insights from the literature on human-human conversation. We expect that the tutorial will be of interest to researchers in dialogue systems, computational semantics and cognitive modeling, and hope that it will catalyze research and system building that more directly explores the creative, strategic ways conversational agents might be able to seek and offer evidence about their understanding of their interlocutors.
T6: Commonsense Reasoning for Natural Language Processing (introductory)
Maarten Sap, Vered Shwartz, Antoine Bosselut, Dan Roth and Yejin Choi
In our tutorial, we (1) outline the various types of commonsense (e.g., physical, social), and (2) discuss techniques to gather and represent commonsense knowledge, while highlighting the challenges specific to this type of knowledge (e.g., reporting bias). We will then (3) discuss the types of commonsense knowledge captured by modern NLP systems (e.g., large pretrained language models), and (4) present ways to measure systems' commonsense reasoning abilities. We finish with (5) a discussion of various ways in which commonsense reasoning can be used to improve performance on NLP tasks, exemplified by an (6) interactive session on integrating commonsense into a downstream task.
T7: Integrating Ethics into the NLP Curriculum (introductory)
Emily M. Bender, Dirk Hovy and Alexandra Schofield
Our goal in this tutorial is to empower NLP researchers and practitioners with tools and resources to teach others about how to ethically apply NLP techniques. Our tutorial will present both high-level strategies for developing an ethics-oriented curriculum, based on experience and best practices, as well as specific sample exercises that can be brought to a classroom. We plan to make this a highly interactive work session culminating in a shared online resource page that pools lesson plans, assignments, exercise ideas, reading suggestions, and ideas from the attendees. We consider three primary topics with our session that frequently underlie ethical issues in NLP research: Dual use, bias and privacy.
In this setting, a key lesson is that there is no single approach to ethical NLP: each project requires thoughtful consideration about what steps can be taken to best support people affected by that project. However, we can learn (and teach) what kinds of issues to be aware of and what kinds of strategies are available for mitigating harm. To teach this process, we apply and promote interactive exercises that provide an opportunity to ideate, discuss, and reflect. We plan to facilitate this in a way that encourages positive discussion, emphasizing the creation of ideas for the future instead of negative opinions of previous work.
T8: Recent Advances in Open-Domain Question Answering (cutting-edge)
Danqi Chen and Scott Wen-tau Yih
Open-domain (textual) question answering (QA), the task of finding answers to open-domain questions by searching a large collection of documents, has been a long-standing problem in NLP, information retrieval (IR) and related fields (Voorhees et al., 1999; Moldovan et al., 2000; Brill et al., 2002; Ferrucci et al., 2010). Traditional QA systems were usually constructed as a pipeline, consisting of many different components such as question processing, document/passage retrieval and answer processing. With the rapid development of neural reading comprehension (Chen, 2018), modern open-domain QA systems have been restructured by combining traditional IR techniques and neural reading comprehension models (Chen et al., 2017; Yang et al., 2019) or even implemented in a fully end-to-end fashion (Lee et al., 2019; Seo et al., 2019). While the system architecture has been drastically simplified, two technical challenges remain critical: (1) “Retriever”: finding documents that (might) contain an answer from a large collection of documents; (2) “Reader”: finding the answer in a given paragraph or a document.
In this tutorial, we aim to provide a comprehensive and coherent overview of recent advances in this line of research. We will start by first giving a brief historical background of open-domain question answering, discussing the basic setup and core technical challenges of the research problem. The focus will then shift to modern techniques and resources proposed for open-domain QA, including the basics of the latest neural reading comprehension systems, new datasets and models. The scope will also be broadened to cover the information retrieval component on how to effectively identify passages relevant to the questions. Moreover, in-depth discussions will be given on the use of traditional / neural IR modules, as well as the trade-offs between modular design and end-to-end training. If time permits, we also plan to discuss some hybrid approaches for answering questions using both text and large knowledge bases (e.g. (Sun et al., 2018)) and give a critical review on how structured data complements the information from unstructured text.
At the end of our tutorial, we will discuss some important questions, including (1) How much progress have we made compared to the QA systems developed in the last decade? (2) What are the main challenges and limitations of current approaches? (3) How to trade off the efficiency (computational time and memory requirements) and accuracy in the deep learning era? We hope that our tutorial will not only serve as a useful resource for the audience to efficiently acquire the up-to-date knowledge, but also provide new perspectives to stimulate the advances of open-domain QA research in the next phase.
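As a rough illustration of the retriever-reader architecture described above (our own sketch, not code from the tutorial), the toy pipeline below uses a sparse TF-IDF retriever over a handful of in-line documents and a placeholder reader; in a real system the reader would be a neural reading comprehension model.

```python
# Toy retriever-reader pipeline (our illustration, not code from the tutorial).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "ACL 2020 was held as a virtual conference in July 2020.",
    "Open-domain QA systems combine a retriever with a reading comprehension model.",
    "SlidesLive hosted the pre-recorded tutorial videos.",
]

def retrieve(question, docs, k=1):
    """'Retriever': rank documents by TF-IDF cosine similarity to the question."""
    vectorizer = TfidfVectorizer().fit(docs + [question])
    doc_vecs = vectorizer.transform(docs)
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vecs)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def read(question, passage):
    """'Reader' stub: a neural reading comprehension model would extract the answer span here."""
    return passage  # placeholder

question = "What does an open-domain QA system combine?"
top_passage = retrieve(question, documents, k=1)[0]
print(read(question, top_passage))
```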
Virtual Conference Organization
In April, it was decided that the whole conference would be held virtually. We carried out the preparations jointly with the tutorial instructors, the virtual infrastructure chairs, the website chairs and the SlidesLive team to set up the virtual conference infrastructure, and to publicize it for each tutorial.
We first made a decision on the time slots for tutorial presentations. Five major time zones were considered: US west coast, US east coast, central Europe, China/Asia and Australia (India was also taken into account). Our aim was to offer at least two convenient time slots (i.e. within regular working hours) for each of these zones.
The final time slots were the following (July 5, Pacific Time):
- slot 1 – 7:00 PM (-1D) to 10:30 PM (-1D)
- slot 2 – 12:00 AM to 3:30 AM
- slot 3 – 6:00 AM to 9:30 AM
- slot 4 – 10:30 AM to 2:00 PM
- slot 5 – 3:00 PM to 6:30 PM
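For reference, the slot start times can be mapped to the five target time zones with a few lines of Python (our own sanity check using the standard zoneinfo module, Python 3.9+; not part of the conference tooling):

```python
# Sketch: convert the July 5 slot start times from Pacific Time to the target time zones.
from datetime import datetime
from zoneinfo import ZoneInfo

pacific = ZoneInfo("America/Los_Angeles")
zones = {
    "US west coast": "America/Los_Angeles",
    "US east coast": "America/New_York",
    "Central Europe": "Europe/Paris",
    "China/Asia": "Asia/Shanghai",
    "Australia": "Australia/Sydney",
}
# Slot start times in Pacific Time (slot 1 starts on the previous day, July 4).
slot_starts = {
    "slot 1": datetime(2020, 7, 4, 19, 0, tzinfo=pacific),
    "slot 2": datetime(2020, 7, 5, 0, 0, tzinfo=pacific),
    "slot 3": datetime(2020, 7, 5, 6, 0, tzinfo=pacific),
    "slot 4": datetime(2020, 7, 5, 10, 30, tzinfo=pacific),
    "slot 5": datetime(2020, 7, 5, 15, 0, tzinfo=pacific),
}
for slot, start in slot_starts.items():
    local = ", ".join(f"{name}: {start.astimezone(ZoneInfo(tz)):%a %H:%M}" for name, tz in zones.items())
    print(f"{slot} -> {local}")
```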
We asked the tutorial teachers to indicate their preferences among the time slots, and – if possible – to agree to give their tutorial twice, so as to increase accessibility. Our final decisions on slot assignments traded off the teacher preferences and slot availability. In the end, the assignments were the following (see also the blog post and online schedule):
- T1 – twice on slots 3 and 4
- T2 – once on slot 5
- T3 – once on slot 4
- T4 – once on slot 4
- T5 – twice on slots 3 and 5
- T6 – once on slot 5
- T7 – twice on slots 3 and 4
- T8 – once on slot 5
Unfortunately, few tutorial teachers were available for slots 1 and 2.
In addition to the time slot arrangements, we gave the tutorial teachers two options for presentation, namely a pre-recorded and a fully interactive form of tutorial. The former allows the teachers to pre-record the main content of their tutorials and use the time slots mainly for question answering and detailed discussions. In the latter, the teachers use the online sessions both for giving the lecture and for other activities such as active discussions. All these arrangements were made to take advantage of the virtual setting of the conference. In the end, the choices were the following:
- T1 – pre-recorded
- T2 – interactive (the initial decision was pre-recorded but a change was made in June)
- T3 – pre-recorded
- T4 – pre-recorded
- T5 – pre-recorded
- T6 – interactive
- T7 – interactive
- T8 – interactive
After making decisions on time slots and teaching forms, we started to work on the technical details of the virtual conference infrastructure jointly with the infrastructure chairs, the website chairs, the SlidesLive support team and individual tutorial teachers. We regularly collected needs from each tutorial, requested information from the SlidesLive team, and tried to accommodate the specific requirements of each tutorial. Since this was the first time ACL was held virtually, it typically took several rounds of information exchange to make a given technical feature work. Major things discussed included:
- Sorting out how to pre-record a tutorial using the system
- Recording of interactive sessions
- Additional RocketChat channel
- Virtual tutorial webpage setup
- Coordinators for tutorial sessions and SlidesLive team participation
- Rehearsal of interactive sessions
- Captioning of pre-recorded and interactive tutorials
- Privacy issues
- Special needs from particular tutorials:
  - Integrating Dory for question answering (in T1)
  - Fully interactive sessions, with occasional split into break-out rooms (in T7)
For the technical details about the SlidesLive infrastructure and the detailed planning of each live session, the teachers were directly in contact with the SlidesLive team. Two online demos were made available by SlidesLive: one about using Zoom for QA sessions and one about how SlidesLive coordinates a live session behind the scenes. SlidesLive also organized a dry run of the live session with the teachers of each tutorial.
To ensure smooth running of the sessions, we sent two rounds of reminders to each tutorial to make sure that each teacher was registered for the conference. We collected rough timelines for each tutorial so that the SlidesLive team could coordinate closely during the live sessions. To further help the teachers prepare their live session content, we additionally collected the lists of registered attendees twice before the conference and shared them with the teachers.
In the end, the numbers of registrations for each tutorial were:
- T1 – 1762
- T2 – 793
- T3 – 503
- T4 – 571
- T5 – 641
- T6 – 1288
- T7 – 378
- T8 – 1087
This sums up to 7023 registrations.
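A two-line check confirms that the per-tutorial counts add up to the reported total:

```python
# Quick check of the total number of tutorial registrations reported above.
registrations = {"T1": 1762, "T2": 793, "T3": 503, "T4": 571,
                 "T5": 641, "T6": 1288, "T7": 378, "T8": 1087}
assert sum(registrations.values()) == 7023
```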
Archiving and Publicity
There were three main channels for publicity, namely the static website, the website of the virtual conference (restricted to the registered participants) and social media (including Twitter). For the static website, we collected photos from each instructor, which are shown with the title and abstract of their tutorials. Following NAACL 2019, we also displayed the time slots as a right-hand-side bar on the tutorial website. Because of the special arrangements of the virtual conference, we marked the time zones explicitly. In addition, we asked the tutorial teachers to provide URL links to teaching materials (e.g., on GitHub) during their final submission, and included these links in their tutorial information. For social media, we drafted two blog posts (i.e., this post and this post) for advertising the tutorials.
The archiving consists of three main items, namely the tutorial proceedings, the slides for each tutorial and the videos. For the tutorial proceedings, we collected final abstracts from the authors using the START system. We asked the instructors to sign the CC BY v4 agreement when submitting their final materials on SoftConf. For the slides, we asked the tutorial teachers to provide a first version of the slides by June 17, regardless of whether their tutorial was pre-recorded or interactive. Some tutorials made further updates to their slides before ACL. For the video recordings, we worked with the ACL Anthology team to discuss the technical details, since this is the first time the tutorial videos are shared on the website. [Note: this was still in progress at the time of writing of this summary.] One important aspect of inclusiveness and accessibility is tutorial captions. With the help of SlidesLive, automatic captions generated for the pre-recordings were collected and sent to the teachers for editing. They should be included in the tutorial archives.
Lessons Learned
While organizing a virtual conference it is important to stress that the modalities of tutorials differ largely from those of main conference papers. Tutorial slots are long (3.5 hours each), attract a larger audience and may require a high degree of interaction, depending on the topics and the teachers' preferences. Therefore, the preparation of tutorials should be considered on an individual basis (see also this post).
It is also important to announce the individual modalities to the attendees early enough. For instance, watching the pre-recordings in advance was necessary for some tutorials (since their live sessions only included question answering), and such constraints should have been communicated further in advance.
If services such as SlidesLive are used, it is crucial that the teachers understand the technical modalities (what added value these services offer compared to virtual meeting tools like Zoom alone, how the live sessions are set up based on pre-recorded material and live interaction, how they are technically coordinated, etc.). If these technical details are unclear, some teachers may be reluctant to cooperate.
One thing we missed, and which future chairs should remember, is to publish the reading list for each tutorial (which the teachers submit with the proposal) on the tutorial website.