Hierarchical Convolutional Attention Networks for Text Classification

Shang Gao, Arvind Ramanathan, Georgia Tourassi


Abstract
Recent work in machine translation has demonstrated that self-attention mechanisms can be used in place of recurrent neural networks to increase training speed without sacrificing model accuracy. We propose combining this approach with the benefits of convolutional filters and a hierarchical structure to create a document classification model that is both highly accurate and fast to train; we name our method Hierarchical Convolutional Attention Networks. We demonstrate the effectiveness of this architecture by surpassing the accuracy of the current state of the art on several classification tasks while being twice as fast to train.
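The abstract names the model's ingredients (self-attention, convolutional filters, a hierarchical structure) but not their exact arrangement. The PyTorch snippet below is a minimal, hypothetical sketch of one way these pieces can fit together: the linear query/key/value projections of scaled dot-product self-attention are replaced with 1-D convolutions, and two such blocks are stacked hierarchically (words pooled into sentence vectors, sentences pooled into a document vector). All dimensions, filter widths, pooling choices, and the single-head design are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAttentionBlock(nn.Module):
    """Scaled dot-product self-attention whose query/key/value projections
    are 1-D convolutions (hypothetical sketch, not the paper's exact layer)."""

    def __init__(self, dim=128, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Convolutional projections mix each token with its local n-gram
        # neighborhood before attention, unlike purely pointwise linear maps.
        self.q_conv = nn.Conv1d(dim, dim, kernel_size, padding=pad)
        self.k_conv = nn.Conv1d(dim, dim, kernel_size, padding=pad)
        self.v_conv = nn.Conv1d(dim, dim, kernel_size, padding=pad)
        self.scale = dim ** -0.5

    def forward(self, x):
        # x: (batch, seq_len, dim); Conv1d expects (batch, dim, seq_len)
        h = x.transpose(1, 2)
        q = self.q_conv(h).transpose(1, 2)
        k = self.k_conv(h).transpose(1, 2)
        v = self.v_conv(h).transpose(1, 2)
        attn = F.softmax((q @ k.transpose(1, 2)) * self.scale, dim=-1)
        return attn @ v  # (batch, seq_len, dim) contextualized features

# Hierarchical use: a word-level block builds sentence vectors, which a
# document-level block then attends over (shapes are toy values).
word_block, sent_block = ConvAttentionBlock(), ConvAttentionBlock()
words = torch.randn(4, 30, 128)              # (sentences, words, dim)
sent_vecs = word_block(words).mean(dim=1)    # pool words -> 4 sentence vectors
doc_input = sent_vecs.unsqueeze(0)           # (1 document, 4 sentences, dim)
doc_vec = sent_block(doc_input).mean(dim=1)  # pool sentences -> document vector
print(doc_vec.shape)                         # torch.Size([1, 128])
```

Mean pooling is used here only to keep the sketch short; in the hierarchical attention networks this work compares against, each level typically pools its outputs with a learned attention weighting rather than a plain average.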
Anthology ID: W18-3002
Volume: Proceedings of the Third Workshop on Representation Learning for NLP
Month: July
Year: 2018
Address: Melbourne, Australia
Editors: Isabelle Augenstein, Kris Cao, He He, Felix Hill, Spandana Gella, Jamie Kiros, Hongyuan Mei, Dipendra Misra
Venue: RepL4NLP
SIG: SIGREP
Publisher: Association for Computational Linguistics
Pages: 11–23
URL: https://aclanthology.org/W18-3002
DOI: 10.18653/v1/W18-3002
Cite (ACL): Shang Gao, Arvind Ramanathan, and Georgia Tourassi. 2018. Hierarchical Convolutional Attention Networks for Text Classification. In Proceedings of the Third Workshop on Representation Learning for NLP, pages 11–23, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal): Hierarchical Convolutional Attention Networks for Text Classification (Gao et al., RepL4NLP 2018)
PDF: https://aclanthology.org/W18-3002.pdf