Accelerating Neural Transformer via an Average Attention Network

Biao Zhang, Deyi Xiong, Jinsong Su


Abstract
With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers: an average layer that models dependencies on previous positions, and a gating layer stacked on top of it to enhance the expressiveness of the proposed attention network. We apply this network to the decoder of the neural Transformer, replacing the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version, with almost no loss in training speed or translation quality. We conduct a series of experiments on WMT17 translation tasks, obtaining robust and consistent decoding speed-ups across six different language pairs.
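To make the abstract's two key ideas concrete, below is a minimal NumPy sketch; all names (aan_average_training, aan_gate, AANDecodingState) are illustrative and not taken from the authors' released code. It shows the masked cumulative average that replaces decoder self-attention at training time, the gating layer that blends each input with its averaged context, and the running-sum dynamic program that makes each decoding step O(1) in sentence length. The paper additionally applies a position-wise feed-forward network to the average and wraps the layer in a residual connection with layer normalization; those parts are omitted here.

    import numpy as np

    def aan_average_training(y):
        # Training-time "masking trick": compute g_j = (1/j) * sum_{k<=j} y_k
        # for all positions j in parallel with one matrix multiply.
        # y: (seq_len, d_model) decoder-side inputs.
        seq_len = y.shape[0]
        weights = np.tril(np.ones((seq_len, seq_len))) / np.arange(1, seq_len + 1)[:, None]
        return weights @ y  # (seq_len, d_model) of cumulative averages

    def aan_gate(y_j, g_j, W, b):
        # Gating layer: an input gate i and a forget gate f (here produced by
        # one fused projection W, an illustrative simplification) blend the
        # current input y_j with its averaged context g_j.
        z = W @ np.concatenate([y_j, g_j]) + b
        i, f = np.split(1.0 / (1.0 + np.exp(-z)), 2)  # sigmoid gates
        return i * y_j + f * g_j

    class AANDecodingState:
        # Decoding-time dynamic program: keep a running sum s_j = s_{j-1} + y_j,
        # so g_j = s_j / j is available in O(1) without revisiting history.
        def __init__(self, d_model):
            self.s = np.zeros(d_model)
            self.j = 0

        def step(self, y_j):
            self.j += 1
            self.s = self.s + y_j
            return self.s / self.j

    # The parallel training pass and the incremental decoder agree:
    rng = np.random.default_rng(0)
    y = rng.standard_normal((5, 8))
    state = AANDecodingState(8)
    step_by_step = np.stack([state.step(y_j) for y_j in y])
    assert np.allclose(aan_average_training(y), step_by_step)

Because each average depends only on the running sum and a position counter, the decoder never re-attends over all earlier positions at each step, which is the source of the decoding speed-up reported in the abstract.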
Anthology ID: P18-1166
Volume: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2018
Address: Melbourne, Australia
Editors: Iryna Gurevych, Yusuke Miyao
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 1789–1798
URL: https://aclanthology.org/P18-1166
DOI: 10.18653/v1/P18-1166
Cite (ACL): Biao Zhang, Deyi Xiong, and Jinsong Su. 2018. Accelerating Neural Transformer via an Average Attention Network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1789–1798, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal): Accelerating Neural Transformer via an Average Attention Network (Zhang et al., ACL 2018)
PDF: https://aclanthology.org/P18-1166.pdf
Poster: P18-1166.Poster.pdf
Code: bzhangXMU/transformer-aan
Data: WMT 2014