Emergent Predication Structure in Hidden State Vectors of Neural Readers

Hai Wang, Takeshi Onishi, Kevin Gimpel, David McAllester


Abstract
A significant number of neural architectures for reading comprehension have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of “predication structure” in the hidden state vectors of these readers. More specifically, we provide evidence that the hidden state vectors represent atomic formulas Φ[c], where Φ is a semantic property (predicate) and c is a constant symbol (entity identifier).
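The abstract's central claim can be made concrete with a small numerical sketch. The following is a minimal illustration, not the authors' code: the hidden dimension, the random embeddings, and the entity names @entity1/@entity2 are all hypothetical. It shows a hidden state decomposing additively into a predicate vector plus an entity vector, with candidate answers scored by an inner product against their entity embeddings, as in aggregation-style readers.

```python
# Minimal sketch (assumed setup, not the paper's implementation) of the
# predication-structure claim: a reader's hidden state h is hypothesized to
# decompose as h ~ e(Phi) + e(c), and candidates are scored by <h, e(a)>.
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden dimension (illustrative)

# Hypothetical embeddings: one predicate and two entity identifiers.
e_phi = rng.normal(size=d)  # semantic property Phi, e.g. "X won the game"
e_c1 = rng.normal(size=d)   # entity @entity1
e_c2 = rng.normal(size=d)   # entity @entity2

# Claimed structure: the hidden state representing the formula Phi[c1] is
# (approximately) the sum of the predicate vector and the entity vector.
h = e_phi + e_c1

# Inner-product scoring: because random high-dimensional vectors are roughly
# orthogonal, the entity actually present in h scores highest.
scores = {"@entity1": h @ e_c1, "@entity2": h @ e_c2}
print(max(scores, key=scores.get))  # -> "@entity1"
```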
Anthology ID:
W17-2604
Volume:
Proceedings of the 2nd Workshop on Representation Learning for NLP
Month:
August
Year:
2017
Address:
Vancouver, Canada
Editors:
Phil Blunsom, Antoine Bordes, Kyunghyun Cho, Shay Cohen, Chris Dyer, Edward Grefenstette, Karl Moritz Hermann, Laura Rimell, Jason Weston, Scott Yih
Venue:
RepL4NLP
SIG:
SIGREP
Publisher:
Association for Computational Linguistics
Pages:
26–36
URL:
https://aclanthology.org/W17-2604
DOI:
10.18653/v1/W17-2604
Cite (ACL):
Hai Wang, Takeshi Onishi, Kevin Gimpel, and David McAllester. 2017. Emergent Predication Structure in Hidden State Vectors of Neural Readers. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 26–36, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Emergent Predication Structure in Hidden State Vectors of Neural Readers (Wang et al., RepL4NLP 2017)
PDF:
https://aclanthology.org/W17-2604.pdf
Data:
CBT, Children's Book Test, Who-did-What