SemEval 2017 Task 9: Abstract Meaning Representation Parsing and Generation

Event Notification Type: 
Call for Participation
Abbreviated Title: 
SemEval 2017-Task 9
Contact Email: 
Contact: 
Jonathan May
Submission Deadline: 
Monday, 16 January 2017

Abstract Meaning Representation parsing returns to SemEval this year, but
this time the stakes are higher and the challenge greater. Generation from AMR
is a brand new task at SemEval, and the stakes are, well, just as high.
In the first subtask, given a sentence of biomedical English text, parsing systems
will attempt to produce an accurate semantic interpretation, as
represented by the AMR standard. In the second subtask, given an AMR representing a sentence of plain English, generation systems will attempt to recover that sentence.

Overview
========

Abstract Meaning Representation (AMR) is a compact, readable,
whole-sentence semantic annotation. Annotation components include
entity identification and typing, PropBank semantic roles, individual
entities playing multiple roles, entity grounding via wikification, as
well as treatments of modality, negation, etc. Parsers have shown us what they can do on news/forum text and we'd now like to see how they handle scientific literature. Additionally, we'd like to be able to generate natural language from an AMR.
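
As a quick illustration (the canonical example from the AMR guidelines, not
drawn from the task data), the sentence "The boy wants to go" is annotated as:

  (w / want-01
     :ARG0 (b / boy)
     :ARG1 (g / go-01
            :ARG0 b))

Here the single variable b fills the :ARG0 role of both want-01 and go-01, an
instance of one entity playing multiple roles.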

Rules
=====

Participants will be provided with about 40,000 sentences of parallel English-AMR training
data from the news/forum domain and about 6,000 sentences from the biomedical domain. In the first subtask, they will parse new English biomedical data and return the resulting
AMRs. In the second subtask, they will generate sentences from parses of news/forum English sentences. Participants may use any resources at their disposal (but may
not hand-annotate the blind data or hire other human beings to
hand-annotate the blind data). The SemEval parsing trophy goes to the parsing system
with the highest Smatch score. The SemEval generation trophy goes to the generation system with the best human judgement score. Note that the subtasks are independent and you may submit to either or both tracks.
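
For reference, Smatch (Cai and Knight, 2013) scores a system AMR against the
gold AMR by converting both into relation triples, finding the variable mapping
that matches the most triples, and reporting the F-score of that match. As a
purely illustrative example (these numbers are invented): if a system AMR has
10 triples, the gold AMR has 12, and 8 match under the best mapping, then
precision is 8/10 = 0.80, recall is 8/12 ≈ 0.67, and Smatch F1 is
2 * 0.80 * 0.67 / (0.80 + 0.67) ≈ 0.73.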

Trophy?
=======

Yes. The winning system in each subtask will receive bragging rights as well as a
trophy, courtesy of the task organizers.

Key Dates
=========

Training Data release: Now!
Evaluation start: January 9, 2017
Evaluation end: January 16, 2017
Paper submission due: February 27, 2017
SemEval workshop: Summer 2017