Leveraging Knowledge Bases in LSTMs for Improving Machine Reading

Bishan Yang, Tom Mitchell


Abstract
This paper focuses on how to take advantage of external knowledge bases (KBs) to improve recurrent neural networks for machine reading. Traditional methods that exploit knowledge from KBs encode knowledge as discrete indicator features. Not only do these features generalize poorly, but they require task-specific feature engineering to achieve good performance. We propose KBLSTM, a novel neural model that leverages continuous representations of KBs to enhance the learning of recurrent neural networks for machine reading. To effectively integrate background knowledge with information from the currently processed text, our model employs an attention mechanism with a sentinel to adaptively decide whether to attend to background knowledge and which information from KBs is useful. Experimental results show that our model achieves accuracies that surpass the previous state-of-the-art results for both entity extraction and event extraction on the widely used ACE2005 dataset.
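The mechanism the abstract describes, attention over retrieved KB concept embeddings plus a learned sentinel that lets the model ignore background knowledge, can be sketched in a few lines of PyTorch. The sketch below is an illustration based only on the abstract, not the paper's exact formulation: the module name, the bilinear scoring, and the tanh sentinel are assumptions; consult the paper itself for the actual KBLSTM equations.

```python
# Minimal sketch of sentinel-gated KB attention, assuming a bilinear score
# between the LSTM state and candidate concept embeddings. Names and gating
# details are illustrative, not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentinelKBAttention(nn.Module):
    """Mix an LSTM state with retrieved KB concept embeddings.

    A learned "sentinel" competes with the KB candidates inside one
    softmax, so the model can choose to ignore background knowledge.
    """

    def __init__(self, hidden_dim: int, kb_dim: int):
        super().__init__()
        self.W_v = nn.Linear(hidden_dim, kb_dim, bias=False)   # scores KB candidates
        self.W_s = nn.Linear(hidden_dim, kb_dim, bias=True)    # produces the sentinel
        self.proj = nn.Linear(kb_dim, hidden_dim, bias=False)  # map mixture back

    def forward(self, h_t, kb_vecs, kb_mask):
        # h_t:     (batch, hidden_dim) LSTM state for the current token
        # kb_vecs: (batch, n_cand, kb_dim) candidate concept embeddings
        # kb_mask: (batch, n_cand), 1 for real candidates, 0 for padding
        query = self.W_v(h_t)                                        # (batch, kb_dim)
        scores = torch.bmm(kb_vecs, query.unsqueeze(2)).squeeze(2)   # (batch, n_cand)
        scores = scores.masked_fill(kb_mask == 0, float("-inf"))

        sentinel = torch.tanh(self.W_s(h_t))                         # (batch, kb_dim)
        s_score = (sentinel * query).sum(dim=1, keepdim=True)        # (batch, 1)

        # Softmax over [candidates; sentinel]: the weights sum to 1, so
        # putting mass on the sentinel means attending less to the KB.
        weights = F.softmax(torch.cat([scores, s_score], dim=1), dim=1)
        alpha, beta = weights[:, :-1], weights[:, -1:]

        mix = torch.bmm(alpha.unsqueeze(1), kb_vecs).squeeze(1) + beta * sentinel
        return h_t + self.proj(mix)  # knowledge-aware state for this token
```

Because the sentinel competes with the KB candidates in a single softmax, its weight beta directly measures how much the model discounts the knowledge base at each token, which is how the model can "adaptively decide whether to attend to background knowledge."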
Anthology ID: P17-1132
Volume: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2017
Address: Vancouver, Canada
Editors: Regina Barzilay, Min-Yen Kan
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 1436–1446
URL: https://aclanthology.org/P17-1132
DOI: 10.18653/v1/P17-1132
Cite (ACL): Bishan Yang and Tom Mitchell. 2017. Leveraging Knowledge Bases in LSTMs for Improving Machine Reading. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1436–1446, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal): Leveraging Knowledge Bases in LSTMs for Improving Machine Reading (Yang & Mitchell, ACL 2017)
PDF: https://aclanthology.org/P17-1132.pdf