Exploiting Domain Knowledge via Grouped Weight Sharing with Application to Text Categorization

Ye Zhang, Matthew Lease, Byron C. Wallace


Abstract
A fundamental advantage of neural models for NLP is their ability to learn representations from scratch. However, in practice this often means ignoring existing external linguistic resources, e.g., WordNet or domain-specific ontologies such as the Unified Medical Language System (UMLS). We propose a general, novel method for exploiting such resources via weight sharing. Prior work on weight sharing in neural networks has considered it largely as a means of model compression. In contrast, we treat weight sharing as a flexible mechanism for incorporating prior knowledge into neural models. We show that this approach consistently yields improved performance on classification tasks compared to baseline strategies that do not exploit weight sharing.
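The core idea is to compose each word's representation from a word-specific embedding plus embeddings shared across all words assigned to the same externally defined group (e.g., a WordNet synset or a UMLS concept), so that group members pool statistical strength. As a rough illustration only, here is a minimal PyTorch sketch of that grouped-sharing idea; the class name, dense mixing matrix, and gating scheme are assumptions made for exposition, not the authors' released code.

    import torch
    import torch.nn as nn

    class GroupSharedEmbedding(nn.Module):
        """Illustrative sketch of grouped weight sharing for word embeddings.

        Each word id owns a private vector and additionally draws on vectors
        shared by every word assigned to the same external-resource group.
        Learned per-(word, group) weights decide how much each group
        contributes; everything is trained jointly with the rest of the model.
        """

        def __init__(self, vocab_size, num_groups, dim, word_to_groups):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, dim)   # word-specific weights
            self.group_emb = nn.Embedding(num_groups, dim)  # weights shared within a group
            # Dense (word, group) mixing logits for simplicity; a real
            # implementation would store this sparsely.
            self.mix = nn.Parameter(torch.zeros(vocab_size, num_groups))
            mask = torch.zeros(vocab_size, num_groups)      # 1 where word w is in group g
            for w, groups in word_to_groups.items():
                mask[w, groups] = 1.0
            self.register_buffer("mask", mask)

        def forward(self, word_ids):
            private = self.word_emb(word_ids)               # (B, dim)
            member = self.mask[word_ids]                    # (B, num_groups)
            # Restrict the softmax to the groups each word belongs to;
            # a word with no groups gets all-zero group weights.
            logits = self.mix[word_ids].masked_fill(member == 0, -1e9)
            weights = torch.softmax(logits, dim=-1) * member
            shared = weights @ self.group_emb.weight        # (B, dim)
            return private + shared                         # combined representation

    # Toy usage: 5-word vocabulary, 2 groups; words 0 and 1 share group 0.
    emb = GroupSharedEmbedding(5, 2, 8, {0: [0], 1: [0], 2: [1]})
    vecs = emb(torch.tensor([0, 1, 4]))  # shape (3, 8); word 4 belongs to no group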
Anthology ID:
P17-2024
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Editors:
Regina Barzilay, Min-Yen Kan
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
155–160
URL:
https://aclanthology.org/P17-2024
DOI:
10.18653/v1/P17-2024
Cite (ACL):
Ye Zhang, Matthew Lease, and Byron C. Wallace. 2017. Exploiting Domain Knowledge via Grouped Weight Sharing with Application to Text Categorization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 155–160, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Exploiting Domain Knowledge via Grouped Weight Sharing with Application to Text Categorization (Zhang et al., ACL 2017)
PDF:
https://aclanthology.org/P17-2024.pdf
Video:
https://aclanthology.org/P17-2024.mp4
Data:
MPQA Opinion Corpus