Augmenting Neural Networks with First-order Logic

Tao Li, Vivek Srikumar


Abstract
Today, the dominant paradigm for training neural networks involves minimizing task loss on a large dataset. Using world knowledge to inform a model, while retaining the ability to perform end-to-end training, remains an open question. In this paper, we present a novel framework for introducing declarative knowledge to neural network architectures in order to guide training and prediction. Our framework systematically compiles logical statements into computation graphs that augment a neural network without extra learnable parameters or manual redesign. We evaluate our modeling strategy on three tasks: machine comprehension, natural language inference, and text chunking. Our experiments show that knowledge-augmented networks can strongly improve over baselines, especially in low-data regimes.
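As a rough illustration of the idea in the abstract, a compiled rule can act as an additive term on a layer's pre-activations, so declarative knowledge shapes the computation graph without introducing learnable parameters. The following is a minimal PyTorch sketch of that mechanism under assumed conventions; the class name LogicAugmentedAttention, the rho strength, and the evidence tensor are illustrative placeholders, not names taken from the paper or the released utahnlp/layer_augmentation code.

```python
import torch
import torch.nn as nn


class LogicAugmentedAttention(nn.Module):
    """Sketch: augment attention scores with a soft constraint of the form
    external_evidence(i, j) -> attend(i, j).

    The rule is compiled into an additive term on the raw scores; rho is a
    fixed constraint strength, so no learnable parameters are added.
    """

    def __init__(self, rho: float = 1.0):
        super().__init__()
        self.rho = rho  # fixed hyperparameter, not learned

    def forward(self, scores: torch.Tensor, evidence: torch.Tensor) -> torch.Tensor:
        # scores:   raw attention scores, shape (batch, m, n)
        # evidence: soft truth values in [0, 1] for the rule's antecedent,
        #           e.g. word relatedness from an external lexicon
        # Wherever the antecedent holds, the rule pushes the score upward.
        augmented = scores + self.rho * evidence
        return torch.softmax(augmented, dim=-1)


if __name__ == "__main__":
    layer = LogicAugmentedAttention(rho=2.0)
    scores = torch.randn(1, 3, 4)
    evidence = torch.zeros(1, 3, 4)
    evidence[0, 0, 2] = 1.0  # external knowledge links token 0 to token 2
    print(layer(scores, evidence))
```

Because rho is fixed, the constraint acts only through the forward computation and end-to-end training proceeds exactly as before.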
Anthology ID:
P19-1028
Original:
P19-1028v1
Version 2:
P19-1028v2
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
292–302
URL:
https://aclanthology.org/P19-1028
DOI:
10.18653/v1/P19-1028
Cite (ACL):
Tao Li and Vivek Srikumar. 2019. Augmenting Neural Networks with First-order Logic. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 292–302, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Augmenting Neural Networks with First-order Logic (Li & Srikumar, ACL 2019)
PDF:
https://aclanthology.org/P19-1028.pdf
Video:
https://aclanthology.org/P19-1028.mp4
Code:
utahnlp/layer_augmentation
Data:
SNLI