A study of attention-based neural machine translation model on Indian languages

Ayan Das, Pranay Yerra, Ken Kumar, Sudeshna Sarkar


Abstract
Neural machine translation (NMT) models have recently been shown to be very successful in machine translation (MT). The use of LSTMs in machine translation has significantly improved translation performance for longer sentences, as their hidden layers can capture the context and long-range dependencies within a sentence. The attention-based NMT system (Bahdanau et al., 2014) has become the state of the art, performing on par with or better than other statistical MT approaches. In this paper, we study the performance of the attention-based NMT system (Bahdanau et al., 2014) on the Indian language pair Hindi and Bengali, and analyse the types of errors that occur when the languages are morphologically rich and large parallel training corpora are scarce. We then apply some heuristic post-processing steps to improve the quality of the translated sentences and suggest further measures that can be taken.
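The abstract refers to the additive attention mechanism of Bahdanau et al. (2014). As a rough illustrative sketch (not the authors' implementation), the Python snippet below shows how each encoder annotation is scored against the current decoder state and the resulting weights are used to form a context vector; the function name, dimensions, and toy inputs are all assumptions made here for illustration.

import numpy as np

def additive_attention(decoder_state, encoder_states, W_a, U_a, v_a):
    """Hypothetical sketch of Bahdanau-style additive attention.

    decoder_state:  (d,)     previous decoder hidden state s_{t-1}
    encoder_states: (T, d)   encoder annotations h_1 .. h_T
    W_a, U_a:       (d', d)  learned projection matrices
    v_a:            (d',)    learned scoring vector
    """
    # e_{tj} = v_a^T tanh(W_a s_{t-1} + U_a h_j), one score per source position
    scores = np.tanh(W_a @ decoder_state + encoder_states @ U_a.T) @ v_a
    # alpha_{tj} = softmax over source positions (stabilised by max-subtraction)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # c_t = sum_j alpha_{tj} h_j, the attention-weighted context vector
    context = weights @ encoder_states
    return context, weights

# Toy usage with random tensors (shapes only; not trained parameters).
rng = np.random.default_rng(0)
d, dp, T = 8, 6, 5
ctx, alpha = additive_attention(
    rng.normal(size=d), rng.normal(size=(T, d)),
    rng.normal(size=(dp, d)), rng.normal(size=(dp, d)),
    rng.normal(size=dp))
print(alpha.round(3), ctx.shape)

The key point the sketch illustrates is that the context vector is recomputed at every decoding step, which is what lets the model attend to different source words for each target word, easing translation of long sentences.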
Anthology ID: W16-3717
Volume: Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016)
Month: December
Year: 2016
Address: Osaka, Japan
Editors: Dekai Wu, Pushpak Bhattacharyya
Venue: WSSANLP
Publisher: The COLING 2016 Organizing Committee
Pages: 163–172
URL: https://aclanthology.org/W16-3717
Cite (ACL): Ayan Das, Pranay Yerra, Ken Kumar, and Sudeshna Sarkar. 2016. A study of attention-based neural machine translation model on Indian languages. In Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016), pages 163–172, Osaka, Japan. The COLING 2016 Organizing Committee.
Cite (Informal): A study of attention-based neural machine translation model on Indian languages (Das et al., WSSANLP 2016)
PDF: https://aclanthology.org/W16-3717.pdf