Morphological Inflection Generation with Hard Monotonic Attention

Roee Aharoni, Yoav Goldberg


Abstract
We present a neural model for morphological inflection generation which employs a hard attention mechanism, inspired by the nearly-monotonic alignment commonly found between the characters in a word and the characters in its inflection. We evaluate the model on three previously studied morphological inflection generation datasets and show that it provides state-of-the-art results in various setups compared to previous neural and non-neural approaches. Finally, we present an analysis of the continuous representations learned by both the hard and soft (Bahdanau et al., 2014) attention models for the task, shedding some light on the features such models extract.
Anthology ID:
P17-1183
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Editors:
Regina Barzilay, Min-Yen Kan
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2004–2015
URL:
https://aclanthology.org/P17-1183
DOI:
10.18653/v1/P17-1183
Cite (ACL):
Roee Aharoni and Yoav Goldberg. 2017. Morphological Inflection Generation with Hard Monotonic Attention. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2004–2015, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Morphological Inflection Generation with Hard Monotonic Attention (Aharoni & Goldberg, ACL 2017)
PDF:
https://aclanthology.org/P17-1183.pdf
Poster:
P17-1183.Poster.pdf
Code:
roeeaharoni/morphological-reinflection