Vectorial Semantic Spaces Do Not Encode Human Judgments of Intervention Similarity

Paola Merlo, Francesco Ackermann


Abstract
Despite their practical success and impressive performance, neural-network-based and distributed semantic techniques have often been criticized for remaining fundamentally opaque and difficult to interpret. In a vein similar to recent work investigating the linguistic abilities of these representations, we study another core, defining property of language: long-distance dependencies. Human languages exhibit the ability to interpret discontinuous elements distant from each other in the string as if they were adjacent. This ability is blocked if a similar, but extraneous, element intervenes between the discontinuous components. We present results showing, under exhaustive and precise conditions, that one kind of word embedding and the similarity spaces it defines do not encode the properties of intervention similarity in long-distance dependencies, and that they therefore fail to represent this core linguistic notion.
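The abstract contrasts embedding-derived similarity spaces with human judgments of intervention similarity. A minimal sketch of the embedding side of such a comparison, using toy vectors and cosine similarity (the vectors and variable names here are illustrative assumptions, not the paper's data or method):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings (hypothetical) for a dependency target, a featurally similar
# intervener, and a featurally dissimilar intervener.
target = np.array([1.0, 0.8, 0.1])
similar_intervener = np.array([0.9, 0.9, 0.2])
dissimilar_intervener = np.array([0.1, -0.5, 1.0])

sim_close = cosine(target, similar_intervener)
sim_far = cosine(target, dissimilar_intervener)

# On a featural account of intervention, a similar intervener blocks the
# long-distance dependency more strongly; the question the paper poses is
# whether distances in the embedding space track such human judgments.
print(sim_close > sim_far)  # here the similar intervener scores higher
```

This only shows how similarity scores would be extracted from a vector space; the paper's finding is that such scores fail to align with human intervention-similarity judgments.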
Anthology ID:
K18-1038
Volume:
Proceedings of the 22nd Conference on Computational Natural Language Learning
Month:
October
Year:
2018
Address:
Brussels, Belgium
Editors:
Anna Korhonen, Ivan Titov
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
392–401
URL:
https://aclanthology.org/K18-1038
DOI:
10.18653/v1/K18-1038
Cite (ACL):
Paola Merlo and Francesco Ackermann. 2018. Vectorial Semantic Spaces Do Not Encode Human Judgments of Intervention Similarity. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 392–401, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Vectorial Semantic Spaces Do Not Encode Human Judgments of Intervention Similarity (Merlo & Ackermann, CoNLL 2018)
PDF:
https://aclanthology.org/K18-1038.pdf