2017Q3 Reports: TACL Journal Editor

From Admin Wiki
Revision as of 02:39, 1 August 2017 by LillianLee (talk | contribs) (→‎Personnel updates: fixed missing paren)


We sincerely thank ACL Secretary Shiqi Zhao for granting us an extension on July 2nd, setting a new reporting deadline of Tuesday, July 25th, due to circumstances beyond our control. -- the TACL Editors-in-Chief

TACL issues that are really community-wide

Pre-print servers and double-blind review

The issues are complicated, and it is too early for TACL to take a definitive stance.

Nonetheless, we have made some inquiries with our arXiv contacts (arXiv is conveniently hosted at the same institution as one of the EiCs) about the possibility of its implementing anonymous submissions. Details are not currently in a publicly shareable state, however; and since arXiv, although by far the most popular, is not the only preprint distribution mechanism, we defer public discussion of that option.

Given this, the options that the TACL EiCs have been discussing among ourselves, and on which we are about to gather TACL AE input, are as follows. The first suggestion is new, but the list is not ordered by preference.

  1. Allow preprints during review, but make the author's choice to do so (or not) salient, by requiring a suffix to submission and camera-ready paper titles (or to the abstract, or as keywords), say “PADR” or “noPADR”, indicating whether a preprint was available during review. In this way, future readers or citers of the paper could see whether it was accepted as "truly" double-blind or was potentially effectively single-blind. This mechanism might also provide some advantage or recognition to authors who choose not to make preprints available during review.
  2. Ban the posting of preprints during review
  3. The status quo

Checking for dual submissions

"Illegal" dual submissions occur, and the (significant) overhead for checking for them each month falls not just on TACL, but also on the program chairs for ACL, EACL, NAACL, EMNLP, and so on for nine months after the conference receives its submissions. It would be excellent if near-duplicate checking could be centralized and automated, perhaps by integrating Regina Barzilay and Min-Yen Kan's ACL 2017 duplicate-detection software into SoftConf? (Informal conversation with the arxiv also suggest that we be granted access to their near-duplicate-detection API.)

Is TACL (not) a conference?

Should TACL be more a part of the conference eco-system, or more distinct from it? One issue that throws this question into sharp relief is the following. TACL currently allows submission of (revised) conference rejections if the revised version is significantly different from the rejected version. However, judging "significantly different" is proving significantly difficult for the EiCs, who collectively spend 2-3 days making each such judgment.

If TACL should be a distinct channel, where people choose to go either "the conference route" or "the journal route", then TACL should move towards not allowing any conference-rejection resubmissions. On the other hand, if TACL should more or less be considered another conference, then it should be more relaxed about accepting revised rejections for review.


TACL workload is part of the community's workload

Even though the TACL arrangement is meant to keep each TACL member occupied with at most one paper at a time, reviewing/service burnout from other conferences and meetings, not to mention those conferences' submission deadlines, is, anecdotally, significantly affecting TACL: we are now experiencing a significant number of reviewers declining TACL reviewing assignments. TACL reviewers and action editors (AEs) are *ACL reviewers, program committee members, and authors as well.

In other words, TACL may be trying to keep reviewing loads down internally, but reviewing burnout is a community-wide issue.


Some statistics

First-decision statistics

Depicted below are the history of the number of distinct submission IDs that have received a decision, excluding desk rejects (in contrast to plots from prior reports), and the average time to first decision, grouped by round (= nearest first-of-the-month). The decision time for papers submitted for a given month's round is counted as starting from the first of that month.
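For concreteness, one way to implement the "nearest first-of-the-month" grouping rule is sketched below (this is our reconstruction of the rule as stated; the actual reporting scripts may differ, and `submission_round` is a hypothetical name):

```python
# Round a submission date to the nearest first-of-the-month (the "round").
# Hypothetical reconstruction of the grouping rule; ties round down.
import calendar
from datetime import date


def submission_round(d):
    """Return the first-of-the-month nearest to date d."""
    days_in_month = calendar.monthrange(d.year, d.month)[1]
    days_since_first = d.day - 1
    days_until_next_first = days_in_month - d.day + 1
    if days_since_first <= days_until_next_first:
        return date(d.year, d.month, 1)  # closer to this month's 1st
    if d.month == 12:                    # otherwise round up, handling December
        return date(d.year + 1, 1, 1)
    return date(d.year, d.month + 1, 1)
```

Under this rule a paper submitted June 20, 2017 counts toward the July 2017 round, and its decision clock starts July 1.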

June and July 2017 actually had 22 and 19 valid submissions, respectively, so the last data points will change as those decisions come in and are included. Also not counted: papers handled using START (in the early days of TACL) and the 146 papers archived for some technical or formatting problem.

For reviewers who complete their reviews, the average completion time over the past 12 months is 23 days, where the TACL "contract" is 21 days. Many, many thanks to these wonderful reviewers!

[Figure: Time to first decision.no desk rejects.png]

We provide a sense of the variance in time to first decision by re-plotting the data above on a per-individual-paper basis, where the x-axis is again the submission round (month) in which the paper was submitted. 45 days (third line from the bottom) would be our ideal. Note that one reason the variance for the last round is lower than for earlier rounds is that papers overdue from the May or June rounds are not yet in the data.

[Figure: 2017Q3 TACL.Time to first decision.each paper.by sub round.png]



The distribution of first decisions for papers submitted in the July 2016 round or later (i.e., the most recent 12-month period), not counting papers that were resubmissions of a prior (c) decision, and not counting the 20 desk rejects or the larger number of other papers ruled out for technical problems, is as follows (denominator = 129):

  5% (a) = accepted as is
 12% (b) = conditional accept:  acceptance guaranteed if conditions met
 50% (c) = rejected, encourage resubmission but no guarantee of acceptance
 33% (d) = rejected with 1-year moratorium on TACL submission


Of the additional 19 decided-on papers that were resubmissions of a (c) and were submitted between July 2016 and now (i.e., the most recent 12-month period), the decisions were:

 26% (a)
 42% (b)
  5% (c) This is one paper: TACL highly discourages consecutive (c) decisions
 26% (d)


Publishing statistics

16 papers have been published so far in 2017. 22 are in some stage of the publication queue.


Personnel updates

Co-Editor-in-Chief Lillian Lee will be rescinding her request to serve only a half-term, and will ask the ACL to instead extend her second term for the full length, until the end of 2019. (Mark Johnson's and Kristina Toutanova's terms end July 2018.)

The following became Action Editors in January 2017: Colin Cherry, Jianfeng Gao, Julia Hockenmaier, Adam Lopez, Ani Nenkova, Sebastian Pado, and Ivan Titov. Welcome aboard!

The following Action Editors have graciously agreed to stay on for an additional term starting July 2017: Hal Daume III, Eric Fosler-Lussier (until EOY 2017, for now), Alexander Koller, Anna Korhonen, Marco Kuhlmann, Diana McCarthy, Daichi Mochihashi, Patrick Pantel (one year), Sebastian Riedel, Stefan Riezler, Brian Roark, Noah Smith, and Mark Steedman. We are very grateful to have these experienced Action Editors back on board to help us through the upcoming challenges!

We are planning to invite new Action Editors in the near term.

We also need and plan to increase the standing reviewer pool.

Major bottlenecks, and major proposed changes as solutions

Here are some major bottlenecks beyond just the usual people missing their deadlines.

  1. Pre-review police work: checking for dual submissions, format violations, and so on. We typically have multiple cases each round.
  2. Pre-review load- and expertise-balanced assignment of papers to action editors
  3. Post-acceptance copy-editing. Until 2017, TACL HQ (the EiCs and Editorial Assistant Cindy Robinson) had been doing all the copy-editing work. This started to introduce very significant delays (months! alas) in publication time.

TACL co-EiC Mark Johnson has summarized the situation and the possible solutions as follows.

Even though TACL has 3 Editors-in-Chief, the workload is enormous because of the sheer volume of submissions. Everyone is stretched thin, and the extended absence of one of the Editors-in-Chief causes us to fall significantly behind.

Because of this, we decided to investigate whether having a professional academic publisher publish TACL would reduce the workload on the Editors in Chief and Action Editors, and improve service for the TACL community.

I contacted 3 publishers over the past few months: MIT Press, Springer, and Microtome Press (run by Stuart Shieber, a Harvard professor who is an expert in open-access publishing). I had extended email and Skype conversations with MIT Press, Springer, and Stuart Shieber (Microtome). This message is a summary of those conversations.

There are four different stages of work involved in producing TACL:

  1. Submission/review workflow management (handling paper submissions, routing them to reviewers, etc.)
  2. Editorial review (handling accept/revise/rejection decisions, etc.)
  3. Production (copy-editing, formatting camera-ready copy)
  4. Distribution (ACL anthology, a publisher might also sell TACL to libraries).

Academic publishers are primarily set up to handle distribution, but they can also handle some parts of production, largely by outsourcing copy-editing to freelancers. (There is a quote from MIT Press for this.)

The actual editorial review needs to be handled by experts who know the area, so there is not much a third-party company can do to help us here. Stu Shieber's suggestion is to build a hierarchy (specifically, to add an additional level of Area Editors, corresponding to Area Chairs in a conference) and push as much work as possible down to lower levels in the hierarchy.

There are several commercial firms that handle the submission workflow. Stu Shieber mentioned the following: Open Journal Systems (we currently use OJS; Stu points out that PKP (at Simon Fraser U.) offers hosting of OJS <https://pkpservices.sfu.ca/content/journal-hosting>); Editorial Manager (this is also what MIT Press and Springer suggested); ScholasticaHQ; ScienceAI; and Ubiquity Press. Editorial Manager, ScholasticaHQ, and ScienceAI are all hosted services (as is OJS hosted by PKP), while Ubiquity Press offers more service (for a price). They should all be easier to use and more robust than our current setup (which is still running on a Columbia site). One issue is that Lillian has highly customised the OJS software for TACL; I suspect this won't be sustainable over the longer term.

All the publishers I spoke to say (and Stu Shieber agrees) that there should be no problem getting TACL indexed by the Science Citation Index (Thomson Reuters) and Scopus (Elsevier). Our agreement with the *ACL conferences to offer a conference presentation to authors of accepted papers does not seem to be a problem.

Here are my suggestions (largely drawn from suggestions from the TACL EiCs, Stu Shieber, etc.):


  1. We should evaluate the various hosted services for handling the submission workflow, with the goal of having someone else run the TACL submission web site. This evaluation is likely to involve a substantial amount of work, as it needs to evaluate working versions of these systems, to see how well they can support Lillian’s highly customised TACL work flow. Perhaps the ACL can find and pay an energetic CLer to try to implement the TACL work flow in each of these systems (or at least find out how hard it would be to do)?
  2. We add an additional level to hierarchy of Area Editors. The web submission software should be customised so authors nominate the area for their paper, and it is automatically routed to the appropriate Area Editor, who handles it in the same way that the EiCs currently do.
    1. The area editors perform aggressive triage of submissions. We add two new review categories for papers that we decline to review, e.g., "desk reject without prejudice" for papers that violate our formatting requirements, and "inappropriate-submission desk reject" for papers that we think are unlikely to be accepted.
    2. We dramatically increase the number of reviewers and action editors.
    3. Action editors are responsible for checking that the submission is acceptable, e.g., checking that all required changes have been made, that the English is acceptable, etc. They can require that a paper be revised to improve the English. TACL should find one or more freelance copy-editors whom authors can use if they cannot correct the English themselves (the author pays, of course).
  3. We should ask Paola Merlo (the CL editor) to describe the services that MIT Press provides for the publication of CL.
  4. We should delay getting TACL indexed until we have solutions to our submission workflow management and editorial review problems. Once TACL is indexed we can expect our submissions to jump sharply.