2009Q3 Reports: NAACL 2009

NAACL HLT Conference Chair Report
This report is also available at http://www.cs.cmu.edu/~lsl/acl/NAACL-HLT-2009-report.pdf
 
 
As General Chair, my goal for the 2009 NAACL HLT Conference was to have high quality technical presentations in all areas, but particularly to increase the number and quality of papers from the speech and IR areas. In addition, I was encouraged by the NAACL Board to explore ways to better engage researchers from industry.

The ACL-HLT 2008 conference chair raised the issue of imbalance in the speech/IR/NLP fields. There were many more submissions in NLP than in the other two areas. She recommended either changing the structure of committees to reflect the actual balance of papers or having an “additional coordinated event that involves invitations to members of the community in order to draw more participation.” For NAACL HLT 2009, we maintained the area distribution for the area chairs, publicity and tutorials, but not for other committees. For committees with fewer chairs, we tried to choose people whose expertise spanned multiple disciplines, which seemed to work well. It would probably be reasonable to take a similar strategy for the PC chairs next year, i.e. have only 3 PC chairs with 2 from NLP and one from speech, with one of the three having a tie to IR. (We had one person serve as lead for long papers (Mike Collins) and one as lead for short papers (Lucy Vanderwende), and this worked very well.)
 
In order to try to attract more papers in the speech and IR areas, we included two special sessions targeting the cross-cutting topics of large scale language processing (IR and NLP) and speech indexing and retrieval (speech and IR). The area chairs for the speech indexing session actively recruited paper submissions, which contributed to the session’s success. The large scale language processing area had several submissions without active recruiting other than our highlighting it in the call for papers.
 
In an attempt to better engage industry researchers, we organized a lunchtime panel discussion on “Emerging Application Areas in Computational Linguistics,” which included representatives from different application areas and companies of different sizes. Thanks to Bill Dolan for organizing and moderating the discussion. Box lunches were available for purchase. The panel discussion was very well attended. One problem was that the panel started late because of the long line for lunches, since participants assembled their own box lunches. Possibilities for improving this include having the lunches pre-boxed to simplify pickup logistics and moving the lunch pickup outside of the panel discussion room. In addition to the industry panel, another success with industry is that the tutorial chairs actively recruited some more practically oriented tutorials, which were very well received.
 
Two web pages were available to provide guidance:

http://aclweb.org/adminwiki/index.php?title=Conference_Handbook
http://www.cs.jhu.edu/~jason/advice/how-to-chair-a-conference
 
While I should have done a better job in making all chairs aware of the guidelines, our biggest problem (coordinating schedules with other conferences) is not addressed in these guidelines. In addition, it would be nice to have guidance on other HLT-specific issues (e.g. demos). NAACL might want to develop an additional web page covering special considerations for NAACL HLT.
 
Several changes were instituted this year, including:

* Multi-conference coordination of sponsorship
* Multi-conference coordination of workshops
* New format for short paper reviewing and explicit call for different types of short papers
* Including the student research workshop as a parallel session within the main conference
* Allowing students whose papers were accepted to the student session to also (optionally) present a poster in the main conference poster session.
 
All changes seemed to work well, with a couple of exceptions. Most importantly, responsibilities among the different sponsorship chairs were unclear, and the regional sponsorship chairs had little connection to the NAACL HLT 2009 meeting. The general idea of multi-conference coordination makes sense, but it is necessary to clarify the role of the local chair and to make this person the main point of contact. Second, as noted in the PC chairs’ report, there were not many papers in the “negative result” and “opinion piece” categories of short papers. The program committee also felt that it would be better to have the author, in addition to the reviewers, indicate the type of short paper. The total funds raised through sponsorship efforts came to $27,350, which includes one $4000 contribution that resulted from the multi-conference sponsorship coordination. The Local Arrangements Chairs were very helpful in this effort, in addition to the Local Sponsorship Chair. Companies contributing included: Rosetta Stone, CNGL, Google, AT&T, Language Weaver, JD Powers, IBM Research, CLEAR, HLT, LDC, and John Benjamins. One difficulty that arose was a last minute decision that funds from a sponsor would go to a student travel award, which caused some minor program glitches. We recommend that an advance deadline be set for designating donated funds for new awards.
 
Based on discussions during and after the conference, additional areas where we think there could be improvements include:

* Parallel review of short papers: A possible reason for serial review was to allow papers submitted as long papers and rejected to be resubmitted as short papers. Very few short papers are accepted this way, and authors can always submit to other conferences. Parallel review would make scheduling much easier and would make the reviewing process less complicated for the program committee.
* Demos: We had very few demos this year, but many people like the tradition. Ideas for addressing this include either assigning the Demo Chairs the task of actively recruiting demos or giving authors of accepted papers the opportunity to present as a demo.
* Poster session: The evening poster session combined with a reception was well received by poster viewers, but less so by poster presenters. People who had posters in the first session didn’t get much food, and people who had their posters in the later session had a smaller audience. Some ideas for improving this include: having a better enforced time period for poster presenters to eat, having some overlap of the two time slots so the second session runs less late, grouping posters that are on related topics in the same area, and including an introductory session where poster presenters give a 1-minute pitch on their poster.
* Workshops: One of the workshop chairs should be affiliated with the local (hosting) institution, since there are a lot of local arrangements issues that arise with the workshops.
 
Other suggestions are included in the reports from other chairs.

While there were several areas for improvement, overall, I consider the conference to be a success. There were roughly 700 participants, and the quality of the tutorials, presentations and workshops was high. The local arrangements were terrific. I am indebted to all the chairs involved in the organization and to the NAACL Board for their support. While there remains an imbalance between NLP, speech and IR, I am encouraged by the quality of the papers that were included. I strongly support continued efforts to include these different areas of HLT and make it possible for researchers to benefit from the insights of these related fields.
 
Mari Ostendorf, University of Washington
General Chair
 
NAACL 2009 Program Chairs Report
 
 
 
In 2009 the NAACL HLT program continued to include high-quality work in the areas of computational linguistics, information retrieval, and speech technology. The program included full papers, short papers, demonstrations, a student research workshop, pre-conference tutorials, and post-conference workshops. The call for papers included solicitation of papers for 2 special sessions, “Large-Scale Language Processing” and “Speech Indexing and Retrieval”. This year, 260 full papers were submitted, of which 75 papers were accepted (giving a 29% acceptance rate); and 178 short papers were submitted, of which 71 were accepted (giving a 40% acceptance rate). All full papers were presented as talks; this contrasts with some previous years, e.g., ACL-08 HLT, where some full papers were presented as posters. Of the short papers, 35 were presented as talks, with the remainder being presented as posters. A full breakdown of the statistics by area is presented at the end of this report.
 
This year, short papers of five types were solicited: “a small, focused contribution”, “work in progress”, “a negative result”, “an opinion piece”, or “an interesting application note”; it was a reviewer task to determine which type a short paper best belonged to (alternatively, this could be a check-box at submission time). In practice, the largest agreement among reviewers was found in the “small, focused contribution” category, the traditional type of short paper submitted to NAACL HLT (119/178). A majority of reviewers thought that 38/178 papers were “work in progress”, and that 10/178 were “interesting application note”. There were only a handful of papers submitted that any of the reviewers considered to be a “negative result” or “opinion piece”. It will take more than one conference cycle to determine the field’s interest in writing, and then accepting, such paper types.
 
Reviewing was organised in a two-tier system, with eighteen senior program committee (SPC) members (“area chairs”) who in turn recruited 352 reviewers. The SPC members managed the review process for both the full and short paper submissions: each full paper received at least three reviews, and each short paper received at least two reviews. As in recent years, we did not have a face-to-face meeting of the area chairs; instead, we held a series of tele-conferences between individual area chairs and the PC chairs. The START conference management system was used to manage paper submissions and the review process; Rich Gerber and the START team gave invaluable help with the system.
 
Two best paper awards were given at the conference. The senior program committee members for the conference nominated an initial set of papers that were candidates for the awards; the final decisions were then made by a committee chaired by Candace Sidner, with Hal Daume III, Roland Kuhn, Ryan McDonald, and Mark Steedman as its other members.
 
Michael Collins, Massachusetts Institute of Technology
Shri Narayanan, University of Southern California
Douglas W. Oard, University of Maryland
Lucy Vanderwende, Microsoft Research
 
Table 1: Statistics for full paper submissions.

Area                                             Submissions   Acceptances (Talk)
Sentiment/Information Extraction                     34             8 (24%)
Discourse                                            10             4 (40%)
Generation/Summarization                             22             2 (9%)
Machine learning                                     28             8 (29%)
Phonology/Morphology/Language acquisition            14             5 (36%)
Semantics                                            32             7 (22%)
Syntax                                               33            11 (33%)
Machine translation                                  37            13 (35%)
Dialog                                                9             3 (33%)
IR/Question answering                                16             4 (25%)
Large-scale language processing                      11             3 (27%)
Speech indexing and retrieval                         1             0 (0%)
Speech/Spoken Language Processing Algorithms          9             4 (44%)
Speech/Spoken Language Processing Applications        8             3 (38%)
 
Area                                        Submissions   Acceptances (talk)   Acceptances (poster)
Sentiment/Information Extraction                22             3 (14%)              4 (18%)
Discourse                                        7             -                    2 (29%)
Generation/Summarization                        10             2 (20%)              3 (30%)
Machine Learning                                14             2 (14%)              2 (14%)
Phonology/Morphology/Language Acquisition        4             -                    1 (25%)
Semantics                                       17             4 (24%)              -
Syntax                                          16             3 (19%)              4 (25%)
Machine Translation                             30             8 (27%)              5 (17%)
Dialog                                          11             2 (18%)              3 (27%)
IR/Question answering                           16             3 (19%)              3 (19%)
Large Scale Processing                           6             1 (17%)              2 (33%)
Speech Indexing and Retrieval                    7             5 (71%)              -
Speech/Spoken Language Algorithms                9             1 (11%)              5 (56%)
Speech/Spoken Language Applications              9             1 (11%)              2 (22%)

Table 2: Statistics for short paper submissions.
 
 
Report from Student Research Workshop / Doctoral Consortium
 
The Student Research Workshop provided a venue for student researchers investigating topics in the broad fields of Computational Linguistics and Language Technologies to present their work and receive feedback from the community. The workshop was composed of three parallel tracks in Natural Language Processing, Information Retrieval, and Speech. We received a total of 29 submissions (4 IR, 4 Speech, 21 NLP) from 11 countries. Submissions were up from last year, although only after we extended the deadline twice, so we wonder whether there may be too many similar venues competing for submissions, such as the ACL SRW and the EACL SRW. Considering the uneven distribution of submissions to the three tracks, the topical organization of the workshop into tracks should also be re-thought.
 
Of the 29 submissions, we accepted 9 as oral presentations (one of which withdrew from the workshop) and another 9 as poster presentations. Accepted oral presentations and posters came from 9 different countries. Both oral presentation and poster presentation sessions were scheduled during the main conference; each paper accepted for oral presentation was also given a slot in the poster session. We made a special effort to schedule the sessions at times when many senior people in the field would be able to attend and offer the wisdom of their years of experience. A total of 86 students and senior researchers agreed to serve on the program committee, which allowed us to assign 4 to 6 reviewers per paper.
 
During the workshop, each oral presentation was followed by a brief panel discussion with two panelists per paper. Despite the extra effort of having to recruit panelists, we believe that the panels added considerable extra value to the workshop. Not only did they ensure good feedback for the presenters, they also helped the audience put the papers into perspective within the respective research fields. Each of the three oral presentation sessions drew an audience of 30 to 50 people.
 
All presenters received financial support from the U.S. National Science Foundation to assist them in their travel to Boulder for the conference. Altogether we received $21,000 from the National Science Foundation to fund the workshop, which included support for student participants, student co-chairs, and the cost of the student lunch. Oral presenters were offered $400 to defray the cost of registration and hotel as well as $500 to cover travel from within North America, or $1000 if they were traveling internationally. Poster presenters were offered $300.00 total for reimbursement. We also budgeted a small amount of money for materials, such as poster boards for the poster presentations.
 
At the student lunch on the day of the SRW, we had a group discussion to get feedback from the student community on how the SRW could be improved and, in general, what could be done to offer mentoring to the students in our community. One issue that was raised is that it is not very clear what sets the SRW apart from the main conference or exactly what types of submissions are desired. Students felt that some of the feedback they received from reviewers wasn’t consistent with what the call for participation described as the target submissions. In response to this, we may need a more structured review form. One participant in the discussion pointed out that the form used for ACL short papers this year was a particularly good example of how to keep reviewers thinking along the right lines for review. Another issue that was raised is that students are not getting encouragement from their advisors to submit to the SRW, so we may need to go back to the faculty segment of the ACL community to find out why and what we can do about it.
 
Students expressed a desire for more networking opportunities at ACL conferences, especially to help the shyer students come out of their shells. One idea was to organize topic-specific round tables that selected faculty would attend, but which would mainly consist of students interested in similar topics. Other ideas included websites to help students find roommates for conferences, and distributing the contact information, affiliations, and research interests of all people registered by a particular date, to help students plan whom they want to try to set up meetings with during the conference.
 
Student Research Workshop Faculty Chairs and Student Co-Chairs

Anoop Sarkar (Faculty Chair, Simon Fraser University)
Carolyn Rose (Faculty Chair, CMU)
Svetlana Stenchikova (Student Co-Chair, Stony Brook University) - Speech
Ulrich Germann (Student Co-Chair, University of Toronto) - NLP
Chirag Shah (Student Co-Chair, University of North Carolina) - Information Retrieval
 
Brief Reports from Other NAACL HLT Chairs
 
Publicity Chairs

* Matthew Stone (Rutgers)
* Gokhan Tur (SRI)
* Diana Inkpen (U Ottawa)
 
The Publicity Chairs were chosen from each of the three HLT areas -- IR, speech and NLP -- in order to ensure good connections to the communities. They forwarded all announcements to mailing lists in their respective fields, including the main conference Call for Papers and Call for Short Papers, the Calls for Workshop and Tutorial Proposals, the Call for Demos, and the Doctoral Consortium Call for Papers. The lists and websites used included: corpora, elsenet, acl, asis-l, linguist, webir, ISCA ISCAPad, IEEE eNewsletter, AI Magazine, and the cognitive science society website.
 
Future organizers should bear in mind that many organizations produce bimonthly or quarterly newsletters for conference announcements (e.g., AAAI's AI magazine, IEEE signal processing speech & language technical committee), which require CFPs to be distributed at least 2-3 months in advance of submission deadlines.
 
Publications Chairs
 
* Christy Doran (MITRE)
* Eric Ringger (BYU)
 
At the recommendation of the ACL-HLT 2008 chair, we continued the tradition from ACL 2008 of having two Publication Chairs, which seemed to work well. The Chairs followed the recipe written for publications chairs by Joakim Nivre and Noah Smith, located here:

http://stp.lingfil.uu.se/~nivre/how-to-pub.html

This includes the updated recipe for using ACLPUB to assemble the actual proceedings:

http://faculty.cs.byu.edu/~ringger/naacl09/howto.html

Several improvements are in the queue for both documents as well as the ACLPUB tools.
 
Notes for improvement and discussion:
 
* Better publicizing and enforcement of publications-related deadlines.
* Mailing lists for relevant subsets of the organizing committee, including PC co-chairs, local organizers, workshop chairs and sponsorship chairs. (There are no sponsor logos in the proceedings, since the sponsorship chairs did not know about the deadline.)
* Improved documentation of pre-requisites for hand-off to OmniPress (e.g. file formats), especially regarding book covers.
* Per Jan Hajic, there is an opportunity to integrate some of ACLPUB into START.
* The number of printed volumes needed continues to drop. It may be a good time to consider going digital only.
* Recommendations for shared documentation on aclweb.org?
 
Tutorials Chairs
 
* Ciprian Chelba (Google)
* Paul Kantor (Rutgers)
* Brian Roark (OHSU)
 
The tutorial chairs actively recruited submissions, received 12, and accepted 8. They erred on the side of accepting rather than rejecting in the case of “ties” in the reviews, and felt that this worked out quite well. Even though 8 tutorials is more than in most years (typically 6), we ended up with sufficient enrollment in all 8 accepted proposals. The complete list of tutorials is given below.
 
1. Data-Intensive Text Processing with MapReduce -- Jimmy Lin and Chris Dyer (45 participants)
2. Distributed Language Models -- Thorsten Brants and Peng Xu (32 participants)
3. Search Algorithms in NLP: Theory and Practice with Dynamic Programming -- Liang Huang (53 participants)
4. Extracting world/linguistic knowledge from Wikipedia -- Simone Paolo Ponzetto and Michael Strube (34 participants)
5. OpenFst: An Open-Source, Weighted FST Library -- Martin Jansche, Cyril Allauzen, and Michael Riley (24 participants)
6. OntoNotes: The 90% Solution -- Sameer Pradhan and Nianwen Xue (12 participants)
7. VerbNet overview, extensions, mappings and apps -- Karin Kipper Schuler, Anna Korhonen, and Susan W. Brown (24 participants)
8. Writing Systems, Transliteration and Decipherment -- Richard Sproat and Kevin Knight (20 participants)
 
Workshops Chairs
 
* Mark Hasegawa-Johnson (UIUC)
* Nizar Habash (Columbia)
 
There were 41 workshop submissions made jointly to ACL, EACL, and NAACL. ACL accepted 12, EACL accepted 10, and we accepted 11. These eleven, plus the student workshop and CoNLL, gave a total of 13 workshops, listed below. The number of participants listed is the final estimate from Priscilla Rasmussen as of May 26. For more information, see http://isle.uiuc.edu/hltnaacl2009/.
 
1. Semantic Evaluations: Recent Achievements and Future Directions
   Organizers: Eneko Agirre, Lluís Marquez, Richard Wicentowski (42 participants)
2. BioNLP 2009
   Organizers: Sophia Ananiadou, K. Bretonnel Cohen, Dina Demner-Fushman, John Pestian, Jun'ichi Tsujii, Bonnie Webber (74 participants)
3. Third International Workshop on Cross Lingual Information Access: Addressing the Information Need of Multilingual Societies
   Organizers: Sivaji Bandyopadhyay, Pushpak Bhattacharya, Vasudeva Varma, Sudeshna Sarkar, A Kumaran (14 participants)
4. Workshop on Integer Linear Programming for Natural Language Processing
   Organizers: James Clarke, Sebastian Riedel (25 participants)
5. Software engineering, testing, and quality assurance for natural language processing
   Organizers: Kevin Bretonnel Cohen, Marc Light (34 participants)
6. Computational Approaches to Linguistic Creativity
   Organizers: Birte Loenneker-Rodman, Anna Feldman (34 participants)
7. Unsupervised and minimally supervised learning of lexical semantics
   Organizers: Suresh Manandhar, Ioannis Klapaftis (34 participants)
8. Semi-supervised Learning for NLP
   Organizers: Qin Wang, Kevin Duh, Dekang Lin (75 participants)
9. Active Learning for NLP
   Organizers: Eric Ringger, Robbie Haertel, Katrin Tomanek (35 participants)
10. Innovative Use of NLP for Building Educational Applications
    Organizers: Joel Tetreault, Jill Burstein, Claudia Leacock (34 participants)
11. Third Workshop on Syntax and Structure in Statistical Translation
    Organizers: Dekai Wu, David Chiang (48 participants)
12. Thirteenth Conference on Computational Natural Language Learning (CoNLL)
    Organizers: Suzanne Stevenson and Xavier Carreras (85 participants)
 
The multi-conference proposal system seemed to work very well for all concerned: three workshops that would not otherwise have been offered (because they were rejected by their first-choice conference) were instead offered at NAACL. All of the workshops had full schedules, as indicated in the online schedule.
 
Demo Chairs
 
* Fred Popowich (Simon Fraser University)
* Michael Johnston (AT&T)
 
Six demos were submitted, and five were accepted. The demo chairs did not actively recruit demos from specific research groups, relying instead on the general publicity efforts. Future organizers might consider opening the demos up to allow people who have accepted papers to give demos in the demo session. Based on attending the demo session, the presenters seemed to be happy with how it went (and they were glad to be in a high-traffic area).
 