2016Q3 Reports: Program Chairs


Innovations

As compared to previous ACL conferences, this year's main innovations were:

  • In addition to best paper awards, a larger number of outstanding papers were selected. The original plan was to identify roughly 1-2% of submissions; ultimately 11 papers out of 1290 submissions (0.85%) were identified as outstanding by the awards committee.
  • Instead of 20-minute talks, long papers were presented in 15-minute talks, plus 5 minutes for questions and changing speakers. Short papers were given 12 minutes plus 4 for questions and transition.
  • Authors were allowed to submit an appendix of any length with their papers. Reviewers were not required to read appendices.
  • Area chairs provided meta-reviews for all papers that were not obvious rejections.
  • The deadline for short papers was ahead of the deadline for long papers.
  • We introduced a new area labeled “Other” to handle papers that did not fit into traditional areas, as well as a few COI papers.

Rationale

The outstanding paper category was added because the community is growing, and so is the number of papers at each conference, including the number of papers that deserve recognition as being of particular importance and quality.

Appendices were added to help address the replication problem. The relevant passage in the Call for Papers reads as follows:

  • ACL 2016 also encourages the submission of supplementary material to report preprocessing decisions, model parameters, and other details necessary for the replication of the experiments reported in the paper. Seemingly small preprocessing decisions can sometimes make a large difference in performance, so it is crucial to record such decisions to precisely characterize state-of-the-art methods.
  • Nonetheless, supplementary material should be supplementary (rather than central) to the paper. It may include explanations or details of proofs or derivations that do not fit into the paper, lists of features or feature templates, sample inputs and outputs for a system, pseudo-code or source code, and data. The paper should not rely on the supplementary material: while the paper may refer to and cite the supplementary material and the supplementary material will be available to reviewers, they will not be asked to review or even download the supplementary material. Authors should refer to the contents of the supplementary material in the paper submission, so that reviewers interested in these supplementary details will know where to look.

Meta-reviews were strongly recommended by the NAACL 2015 program co-chairs.

The deadline for short papers was moved ahead of the deadline for long papers to coordinate deadlines with NAACL, such that rejected NAACL submissions could be reworked into ACL submissions.

The “Other” area was introduced to give papers in non-traditional research areas a better chance at fair review. Rather than conventional bidding, the “Other” area chairs were given access to the reviewer pool directly (while bidding was taking place for papers in the rest of the areas), so that these papers were given the first chance at the most appropriate reviewers.

Submissions and Presentations

ACL 2016 received a total of 1288 valid submissions, of which 825 were long papers and 463 were short papers. 21 long papers and 9 short papers were rejected without review due to non-anonymity or formatting issues. The remaining submissions were each assigned to one of 19 areas and managed by a program committee of 38 area chairs and 884 reviewers (including secondary reviewers indicated on the review forms). 231 (28%) of the 825 qualifying long papers and 97 (21%) of the 463 qualifying short papers were selected for presentation at the conference. Of the accepted long papers, 116 were selected for oral presentation and 115 for poster presentation; of the accepted short papers, 49 were selected for oral and 48 for poster presentation. The oral versus poster decision was based on the recommendations of reviewers, which we took as a noisy signal of the intended audience's preferred format for each paper.

In addition, 25 TACL papers will be presented at ACL – 24 as talks and one as a poster. Including TACL papers, there will be 189 oral and 163 poster presentations at the main ACL conference. The table below shows the number of reviewed submissions in each area for long and short papers, as well as the number of papers accepted in each area. Approximately 59 short and 52 long papers were withdrawn before review was completed; these are not included in the table.


<table>
<tr><th>area</th><th>long reviewed</th><th>long accepted</th><th>short reviewed</th><th>short accepted</th><th>total submissions</th><th>percentage of total submissions</th><th>total accepted</th><th>percentage of total accepted</th><th>area acceptance rate</th><th>outstanding papers</th></tr>
<tr><td>Semantics</td><td>114</td><td>43</td><td>66</td><td>13</td><td>180</td><td>14.0%</td><td>56</td><td>17.1%</td><td>31.1%</td><td>3</td></tr>
<tr><td>Information Extraction, Question Answering, and Text Mining</td><td>122</td><td>27</td><td>48</td><td>9</td><td>170</td><td>13.2%</td><td>36</td><td>11.0%</td><td>21.2%</td><td>1</td></tr>
<tr><td>Sentiment Analysis and Opinion Mining</td><td>77</td><td>9</td><td>28</td><td>3</td><td>105</td><td>8.2%</td><td>12</td><td>3.7%</td><td>11.4%</td><td></td></tr>
<tr><td>Document Analysis</td><td>53</td><td>13</td><td>49</td><td>8</td><td>102</td><td>7.9%</td><td>21</td><td>6.4%</td><td>20.6%</td><td></td></tr>
<tr><td>Machine Translation</td><td>58</td><td>15</td><td>36</td><td>9</td><td>94</td><td>7.3%</td><td>24</td><td>7.3%</td><td>25.5%</td><td>1</td></tr>
<tr><td>Tagging, Chunking, Syntax, and Parsing</td><td>48</td><td>18</td><td>33</td><td>9</td><td>81</td><td>6.3%</td><td>27</td><td>8.2%</td><td>33.3%</td><td>2</td></tr>
<tr><td>Social Media</td><td>39</td><td>6</td><td>30</td><td>8</td><td>69</td><td>5.4%</td><td>14</td><td>4.3%</td><td>20.3%</td><td></td></tr>
<tr><td>Machine Learning</td><td>46</td><td>15</td><td>22</td><td>6</td><td>68</td><td>5.3%</td><td>21</td><td>6.4%</td><td>30.9%</td><td>1</td></tr>
<tr><td>Resources and Evaluation</td><td>44</td><td>17</td><td>21</td><td>6</td><td>65</td><td>5.0%</td><td>23</td><td>7.0%</td><td>35.4%</td><td></td></tr>
<tr><td>Other</td><td>34</td><td>10</td><td>27</td><td>5</td><td>61</td><td>4.7%</td><td>15</td><td>4.6%</td><td>24.6%</td><td></td></tr>
<tr><td>Discourse and Pragmatics</td><td>42</td><td>15</td><td>18</td><td>4</td><td>60</td><td>4.7%</td><td>19</td><td>5.8%</td><td>31.7%</td><td></td></tr>
<tr><td>Summarization</td><td>29</td><td>5</td><td>19</td><td>2</td><td>48</td><td>3.7%</td><td>7</td><td>2.1%</td><td>14.6%</td><td></td></tr>
<tr><td>Multilinguality</td><td>19</td><td>6</td><td>24</td><td>5</td><td>43</td><td>3.3%</td><td>11</td><td>3.4%</td><td>25.6%</td><td></td></tr>
<tr><td>Phonology, Morphology, and Word Segmentation</td><td>23</td><td>6</td><td>10</td><td>4</td><td>33</td><td>2.6%</td><td>10</td><td>3.0%</td><td>30.3%</td><td>1</td></tr>
<tr><td>Generation</td><td>20</td><td>8</td><td>9</td><td>3</td><td>29</td><td>2.3%</td><td>11</td><td>3.4%</td><td>37.9%</td><td></td></tr>
<tr><td>Dialog and Interactive Systems</td><td>20</td><td>7</td><td>8</td><td>0</td><td>28</td><td>2.2%</td><td>7</td><td>2.1%</td><td>25.0%</td><td>1</td></tr>
<tr><td>Cognitive Modeling and Psycholinguistics</td><td>17</td><td>7</td><td>8</td><td>1</td><td>25</td><td>1.9%</td><td>8</td><td>2.4%</td><td>32.0%</td><td>1</td></tr>
<tr><td>Vision, Robots, and Grounding</td><td>14</td><td>3</td><td>6</td><td>2</td><td>20</td><td>1.6%</td><td>5</td><td>1.5%</td><td>25.0%</td><td></td></tr>
<tr><td>Speech</td><td>6</td><td>1</td><td>1</td><td>0</td><td>7</td><td>0.5%</td><td>1</td><td>0.3%</td><td>14.3%</td><td></td></tr>
<tr><td>total</td><td>825</td><td>231</td><td>463</td><td>97</td><td>1288</td><td>100.0%</td><td>328</td><td>100.0%</td><td>25.5%</td><td>11</td></tr>
</table>
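
For clarity on the derived columns: "total submissions" is the sum of an area's long and short reviewed papers, "total accepted" is the sum of its long and short accepted papers, and the three percentage columns are computed against the 1288 reviewed submissions, the 328 accepted papers, and the area's own submissions, respectively. The short Python sketch below is purely illustrative (the three example rows are copied from the table) and simply recomputes those derived values:

<pre>
# Illustrative recomputation of the derived table columns from the raw counts.
# Each tuple: (area, long reviewed, long accepted, short reviewed, short accepted).
rows = [
    ("Semantics", 114, 43, 66, 13),
    ("Machine Translation", 58, 15, 36, 9),
    ("Speech", 6, 1, 1, 0),
]
TOTAL_SUBMISSIONS = 1288  # "total" row, reviewed submissions
TOTAL_ACCEPTED = 328      # "total" row, accepted papers

for area, long_rev, long_acc, short_rev, short_acc in rows:
    submitted = long_rev + short_rev   # "total submissions" column
    accepted = long_acc + short_acc    # "total accepted" column
    print(f"{area}: {100 * submitted / TOTAL_SUBMISSIONS:.1f}% of submissions, "
          f"{100 * accepted / TOTAL_ACCEPTED:.1f}% of accepted papers, "
          f"{100 * accepted / submitted:.1f}% area acceptance rate")

# Semantics: 14.0% of submissions, 17.1% of accepted papers, 31.1% area acceptance rate
# Machine Translation: 7.3% of submissions, 7.3% of accepted papers, 25.5% area acceptance rate
# Speech: 0.5% of submissions, 0.3% of accepted papers, 14.3% area acceptance rate
</pre>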

The five areas with the highest number of submissions this year were Semantics; Information Extraction, Question Answering, and Text Mining; Sentiment Analysis and Opinion Mining; Document Analysis (including text categorization, topic models, and retrieval); and Machine Translation.

Review Process

The page limit was 8 pages for long paper submissions and 4 pages for short paper submissions, each with unlimited additional pages for references. Camera-ready versions were given one additional content page: 9 pages plus unlimited references for long papers, and 5 pages plus unlimited references for short papers. As mentioned above, authors were additionally allowed to submit an appendix of unlimited length.

We changed the review forms slightly. We divided the SOUNDNESS category into one category for theoretical soundness and one for empirical soundness. We also pre-structured the comment box in the review form with three headings, “strengths,” “weaknesses,” and “discussion.” Reviewers were free to delete these headings when authoring their reviews.

Reviewer load balancing continues to be a challenge for this conference. Reviewers were invited directly by area chairs, then later asked to fill out a survey indicating which areas they were willing to review for. After papers were submitted and routed to areas, we used a tool provided by Mark Dredze (and used in previous ACL conferences) to improve the load balancing across areas. This worked well for short papers, but not as well for the much larger set of long papers; even with substantial manual corrections, there was considerable disparity in the reviewing load across areas. We suspect the greedy algorithm implemented in this tool is partly to blame, but cannot rule out the possibility that we simply needed more reviewers, or more flexible reviewers, to achieve balance. We expect that reviewer recruiting and assignment will continue to be a challenge as ACL grows.
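
We cannot reproduce the actual tool here, but a minimal sketch of the general kind of greedy heuristic discussed above, which always gives the next paper to the least-loaded willing reviewers, illustrates how such a procedure can leave residual imbalance when some reviewers are willing to serve in fewer areas. All paper and reviewer names, and the two-reviews-per-paper setting, are hypothetical:

<pre>
from collections import defaultdict

def greedy_assign(willing, per_paper=2):
    """Assign reviewers paper by paper, always picking the least-loaded
    reviewers willing to review that paper.  Each choice is locally
    balanced, but earlier choices are never revisited, so the final
    loads can still end up uneven."""
    load = defaultdict(int)
    assignment = {}
    for paper, reviewers in willing.items():
        chosen = sorted(reviewers, key=lambda r: load[r])[:per_paper]
        for r in chosen:
            load[r] += 1
        assignment[paper] = chosen
    return assignment, dict(load)

# Hypothetical example: r3 only volunteered for two of the four papers' areas.
willing = {
    "P1": ["r1", "r2", "r3"],
    "P2": ["r1", "r2", "r3"],
    "P3": ["r1", "r2"],
    "P4": ["r1", "r2"],
}
assignment, load = greedy_assign(willing)
print(load)  # {'r1': 4, 'r2': 3, 'r3': 1}: uneven, despite balancing at each step
</pre>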

There were a large number of papers for which area chairs had conflicts of interest, about 5-10% of submissions, including some papers where one of the PC co-chairs had a conflict. While the ACL policy at that time stated that “The identity of the reviewers of a paper shall be withheld from all people who have a conflict of interest in that paper,” we found this infeasible for such a large number of papers. Together with the ACL 2016 coordinating committee, we decided on the following procedure for ACL 2016 (a schematic sketch of this routing logic follows the list):

  • As the START tool allows ACL PC co-chairs to see all papers and all reviewers, the PC co-chairs did not submit to the conference, but they did handle most aspects of the process for all papers that were submitted to the conference. When one PC co-chair had a COI with a paper, the other co-chair made the final decisions about acceptance and presentation format.
  • Papers co-authored by area chairs were assigned to areas other than the one that they chaired.
  • If an area chair had a COI with a paper (but was not an author on it), the other area chairs of the same area handled the paper, but the paper remained in the area. START supports this by making such papers invisible to the area chair with the COI. (Note, though, that there were some glitches where papers were not assigned meta-reviews because of visibility issues. These were handled informally when discovered.)
  • If all area chairs of an area had a COI with the same paper, the paper was re-assigned to a different area. The “Other” area was a reasonable last resort.
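
The sketch below summarizes this routing logic as plain Python. It is purely illustrative: all argument and field names are hypothetical, and in practice the routing and visibility restrictions were handled through START rather than custom code.

<pre>
def route_paper(paper_authors, paper_cois, area_name, area_chairs, pc_cochairs):
    """Schematic version of the ACL 2016 COI handling rules listed above."""
    # A PC co-chair without a COI makes the final decision.
    deciders = [c for c in pc_cochairs if c not in paper_cois] or list(pc_cochairs)

    # Papers co-authored by an area chair go to a different area.
    if any(chair in paper_authors for chair in area_chairs):
        return {"area": "(another area)", "handled_by": [], "decision_by": deciders}

    # Non-author COI: keep the paper in the area; the chairs without a
    # conflict handle it (START hides the paper from the conflicted chair).
    clear_chairs = [c for c in area_chairs if c not in paper_cois]
    if clear_chairs:
        return {"area": area_name, "handled_by": clear_chairs, "decision_by": deciders}

    # All chairs of the area are conflicted: reassign, "Other" as a last resort.
    return {"area": "Other", "handled_by": [], "decision_by": deciders}

# Hypothetical example: one area chair has a non-author COI with the paper.
print(route_paper(
    paper_authors={"author1", "author2"},
    paper_cois={"chairA"},
    area_name="Semantics",
    area_chairs=["chairA", "chairB"],
    pc_cochairs=["pc1", "pc2"],
))
# {'area': 'Semantics', 'handled_by': ['chairB'], 'decision_by': ['pc1', 'pc2']}
</pre>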

As mentioned above, area chairs were asked to submit a meta-review (a functionality that is supported by START) for all papers that were not clear rejects. We found this to be extremely helpful in deciding on borderline papers.

Outstanding and Best Papers

The area chairs nominated 18 papers from 13 areas for recognition as outstanding papers. Out of these, the paper awards committee selected 9 outstanding long papers, of which one was awarded “best paper” and another “best student paper.” For short papers, 4 papers were nominated and 2 selected as outstanding. The paper awards committee was formed before the paper deadlines, and papers were given to the committee in their originally submitted, anonymized form. Reviews and meta-reviews of the nominated papers were also provided to the committee on request.

Presentations

The oral presentations are arranged in up to seven parallel sessions, and there are two large poster sessions, with dinner, on two evenings of the conference. We manually grouped the papers into sessions, relying more on the topic of each paper than on its area. In the end, only 10 of the 44 oral sessions were made up of papers all reviewed in the same area.

Based on recommendations of past program chairs, we aimed to have at least 11 m² available for every poster presented in the poster sessions, to make the space comfortable and easy to move in. Additional space was booked at the poster session venue to allow for this; prior to the conference we estimated that more than 12 m² would be available for every poster at the busier Monday night poster session.

Timeline

The timeline this year was complicated to plan because of pressure to keep the ACL deadlines well separated from the NAACL and EMNLP deadlines, so that papers rejected from one conference could be revised and submitted to the next.

The complete timeline is given below.

  • 29 Feb 2016: short paper submission deadline
  • 1-3 Mar 2016: program co-chairs assign short papers to areas [3 days]
  • 4-7 Mar 2016: short paper bidding period [4 days]
  • 8-10 Mar 2016: area chairs assign short papers to reviewers [3 days]
  • 11-27 Mar 2016: short paper review period [17 days]
  • 28-31 Mar 2016: short paper reviewer discussion [4 days]
  • 1-8 Apr 2016: short paper recommendations and meta-reviews [8 days]
  • 9-15 Apr 2016: program co-chairs finalize short paper decisions [7 days]
  • 19 May 2016: short paper camera-ready deadline [34 days]


  • 18 Mar 2016: long paper submission deadline
  • 19-24 Mar 2016: program co-chairs assign long papers to areas [6 days]
  • 25 Mar-1 Apr 2016: long paper bidding period [8 days]
  • 2-5 Apr 2016: area chairs assign long papers to reviewers [4 days]
  • 6-27 Apr 2016: long paper review period [22 days]
  • 28 Apr-1 May 2016: long paper author response period [4 days]
  • 2-6 May 2016: long paper reviewer discussion [5 days]
  • 7-12 May 2016: long paper recommendations and meta-reviews [6 days]
  • 13-24 May 2016: program co-chairs finalize long paper decisions [12 days]
  • 7 June 2016: long paper camera-ready deadline [14 days]

This timeline left 60 days between the last camera-ready deadline and the first day of the conference (August 7).
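
The bracketed durations in the timeline, and the 60-day gap mentioned above, can be checked with simple date arithmetic. The sketch below assumes each bracketed figure is an inclusive count of the days in that step (and that the 60-day figure counts the days strictly between the two dates), which matches the numbers given:

<pre>
from datetime import date

def inclusive_days(start, end):
    """Inclusive length of a period, e.g. 11-27 Mar 2016 -> 17 days."""
    return (end - start).days + 1

print(inclusive_days(date(2016, 3, 11), date(2016, 3, 27)))  # 17: short paper review period
print(inclusive_days(date(2016, 4, 6), date(2016, 4, 27)))   # 22: long paper review period
print(inclusive_days(date(2016, 5, 25), date(2016, 6, 7)))   # 14: end of long decisions to camera-ready

# Days strictly between the last camera-ready deadline (7 June) and the
# first day of the conference (7 August).
print((date(2016, 8, 7) - date(2016, 6, 7)).days - 1)        # 60
</pre>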

Recommendations

1. A number of authors had problems obtaining visas in time to attend ACL. To obtain a visa, authors need an invitation letter from the conference, which (in this year’s case) could be issued only after the authors had registered for the conference. The time necessary to obtain visas should be factored into the planned timeline, so that registration opens early enough, and decisions are announced early enough, for authors to obtain their visas and plan travel.

2. Many reviews were late. At the time that author response started, one third of the papers had at least one review missing, and some papers had all three reviews missing. We recommend leaving a few extra days between the end of reviewing and the start of author response, and establishing some way of passing information about delinquent reviewers forward from conference to conference.

3. As discussed above, the reviewer load balancing task needs a more principled solution, so that enough reviewers are recruited in advance of the deadlines and the reviewing load can be distributed evenly across areas without extensive manual correction.

4. Switching the order of the short and long paper reviewing periods was a success. The program committee found it very useful to have a “warmup” with the short papers (which are faster to review and fewer in number) before handling the long papers. We also noticed that the number of short paper submissions went down (the lowest in four years, and 25% lower than the previous three years’ average), while long paper submissions were about the same as in recent years (a 3% gain over the previous three years’ average). It might be that the earlier short paper deadline discourages short paper submissions, though the overlap in review periods with NAACL (for short papers) makes it hard to know for sure. The acceptance rate for short papers was also a bit lower than in recent years, but we have no way of knowing whether this is a result of the different timeline.