Lawrence S. Moss

Also published as: Lawrence Moss


2023

Curing the SICK and Other NLI Maladies
Aikaterini-Lida Kalouli | Hai Hu | Alexander F. Webb | Lawrence S. Moss | Valeria de Paiva
Computational Linguistics, Volume 49, Issue 1 - March 2023

Against the backdrop of ever-improving Natural Language Inference (NLI) models, recent efforts have focused on the suitability of current NLI datasets and on the feasibility of the NLI task as it is currently approached. Many recent studies have exposed the human disagreements inherent in the inference task and have proposed a shift from categorical labels to subjective probability assessments that capture human uncertainty. In this work, we show that neither the current task formulation nor the proposed uncertainty gradient is entirely suitable for solving the NLI challenges. Instead, we propose an ordered sense-space annotation, which distinguishes between logical and common-sense inference. One end of the space captures nonsensical inferences, while the other end represents strictly logical scenarios. In the middle of the space, we find a continuum of common sense, namely, the subjective and graded opinion of a “person on the street.” To arrive at the proposed annotation scheme, we perform a careful investigation of the SICK corpus and create a taxonomy of annotation issues and guidelines. We re-annotate the corpus with the proposed scheme, utilizing four symbolic inference systems, and then perform a thorough evaluation of the scheme by fine-tuning and testing commonly used pre-trained language models on the re-annotated SICK in various settings. We also pioneer a crowd annotation of a small portion of the MultiNLI corpus, showing that our scheme can be adapted for annotation by non-experts on another NLI corpus. Our work shows the efficiency and benefits of the proposed mechanism and opens the way for a careful refinement of the NLI task.

2021

NeuralLog: Natural Language Inference with Joint Neural and Logical Reasoning
Zeming Chen | Qiyue Gao | Lawrence S. Moss
Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics

Deep learning (DL) based language models achieve high performance on various benchmarks for Natural Language Inference (NLI), while symbolic approaches to NLI currently receive less attention. Both approaches (symbolic and DL) have their advantages and weaknesses; however, no existing method combines them in a single system to solve the NLI task. To merge symbolic and deep learning methods, we propose an inference framework called NeuralLog, which utilizes both a monotonicity-based logical inference engine and a neural network language model for phrase alignment. Our framework models the NLI task as a classic search problem and uses the beam search algorithm to search for optimal inference paths. Experiments show that our joint logic and neural inference system improves accuracy on the NLI task and achieves state-of-the-art accuracy on the SICK and MED datasets.
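The search formulation described in this abstract can be sketched in a few lines: states are rewritten versions of the premise, and beam search looks for a chain of rewrites that reaches the hypothesis. The state class, scoring function, rewrite rules, and mini-lexicon below are hypothetical stand-ins chosen for illustration, not the NeuralLog implementation.

```python
# A minimal sketch of NLI as beam search over inference paths.
# All names and the toy lexicon are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass(order=True)
class State:
    score: float                          # heuristic closeness to the hypothesis
    sentence: str = field(compare=False)  # current rewritten premise

def beam_search(premise, hypothesis, rewrite, score, beam_width=5, max_steps=4):
    """Return True if some chain of rewrites turns the premise into the hypothesis."""
    beam = [State(score(premise, hypothesis), premise)]
    for _ in range(max_steps):
        if any(s.sentence == hypothesis for s in beam):
            return True  # an inference path was found
        candidates = [State(score(nxt, hypothesis), nxt)
                      for s in beam for nxt in rewrite(s.sentence)]
        if not candidates:
            break
        beam = sorted(candidates, reverse=True)[:beam_width]  # keep the best states
    return any(s.sentence == hypothesis for s in beam)

# Toy monotonicity-style rewrite: replace a word with a hypernym
# (sound in upward-monotone positions). The mini-lexicon is made up.
HYPERNYMS = {"dog": ["animal"], "runs": ["moves"]}

def rewrite(sentence):
    words = sentence.split()
    for i, w in enumerate(words):
        for h in HYPERNYMS.get(w, []):
            yield " ".join(words[:i] + [h] + words[i + 1:])

def score(sentence, hypothesis):
    hyp = set(hypothesis.split())
    return len(set(sentence.split()) & hyp) / len(hyp)

print(beam_search("the dog runs", "the animal moves", rewrite, score))  # → True
```

In the real system the scoring would come from a neural language model aligning phrases, and the rewrites from a logical inference engine; here both are replaced by trivial word-overlap and hypernym-substitution placeholders.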

Proceedings of the 1st and 2nd Workshops on Natural Logic Meets Machine Learning (NALOMA)
Aikaterini-Lida Kalouli | Lawrence S. Moss
Proceedings of the 1st and 2nd Workshops on Natural Logic Meets Machine Learning (NALOMA)

2020

MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity
Hai Hu | Qi Chen | Kyle Richardson | Atreyee Mukherjee | Lawrence S. Moss | Sandra Kuebler
Proceedings of the Society for Computation in Linguistics 2020

OCNLI: Original Chinese Natural Language Inference
Hai Hu | Kyle Richardson | Liang Xu | Lu Li | Sandra Kübler | Lawrence Moss
Findings of the Association for Computational Linguistics: EMNLP 2020

Despite the tremendous recent progress on natural language inference (NLI), driven largely by large-scale investment in new datasets (e.g., SNLI, MNLI) and advances in modeling, most progress has been limited to English due to a lack of reliable datasets for most of the world’s languages. In this paper, we present the first large-scale NLI dataset for Chinese (consisting of ~56,000 annotated sentence pairs), called the Original Chinese Natural Language Inference dataset (OCNLI). Unlike recent attempts at extending NLI to other languages, our dataset does not rely on any automatic translation or non-expert annotation. Instead, we elicit annotations from native speakers specializing in linguistics. We closely follow the annotation protocol used for MNLI, but create new strategies for eliciting diverse hypotheses. We establish several baseline results on our dataset using state-of-the-art pre-trained models for Chinese, and find that even the best-performing models are far outpaced by human performance (~12% absolute performance gap), making it a challenging new resource that we hope will help to accelerate progress in Chinese NLU. To the best of our knowledge, this is the first human-elicited MNLI-style corpus for a non-English language.

2019

Proceedings of the Sixth Workshop on Natural Language and Computer Science
Robin Cooper | Valeria de Paiva | Lawrence S. Moss
Proceedings of the Sixth Workshop on Natural Language and Computer Science

2017

A Monotonicity Calculus and Its Completeness
Thomas Icard | Lawrence Moss | William Tune
Proceedings of the 15th Meeting on the Mathematics of Language

2014

Recent Progress on Monotonicity
Thomas F. Icard III | Lawrence S. Moss
Linguistic Issues in Language Technology, Volume 9, 2014 - Perspectives on Semantic Representations for Textual Inference

This paper serves two purposes. First, it summarizes much of the work on formal treatments of monotonicity and polarity in natural language, and discusses connections to related work on exclusion relations, psycholinguistics, and computational linguistics. The second part of the paper presents a summary of some new work on a formal Monotonicity Calculus.
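The core idea behind monotonicity reasoning can be shown with the determiner "every", which is downward monotone in its restrictor and upward monotone in its scope. The triple representation and mini-ontology below are simplified stand-ins for illustration, not the calculus developed in the paper.

```python
# Toy polarity-based inference for "every"-sentences.
# The lexicon and sentence format are illustrative assumptions.
HYPERNYM = {"dog": "animal", "run": "move"}  # child -> parent in a tiny ontology

def word_entails(w1, w2, polarity):
    """In an upward-monotone position a word may be generalized (replaced
    by its hypernym); in a downward-monotone position it may be specialized."""
    if w1 == w2:
        return True
    if polarity == "up":
        return HYPERNYM.get(w1) == w2
    return HYPERNYM.get(w2) == w1  # "down"

def every_entails(premise, hypothesis):
    """Each argument is an ('every', restrictor, scope) triple."""
    _, r1, s1 = premise
    _, r2, s2 = hypothesis
    return word_entails(r1, r2, "down") and word_entails(s1, s2, "up")

# "Every animal runs" entails "Every dog moves": the restrictor is
# specialized (downward position) and the scope is generalized (upward).
print(every_entails(("every", "animal", "run"), ("every", "dog", "move")))  # → True
```

Note the asymmetry: the reverse direction fails, since "every dog runs" does not license generalizing the restrictor to "every animal runs".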

1993

A Unification-Based Parser for Relational Grammar
David E. Johnson | Adam Meyers | Lawrence S. Moss
31st Annual Meeting of the Association for Computational Linguistics

1986

Boolean Semantics for Natural Language
Lawrence Moss
Computational Linguistics. Formerly the American Journal of Computational Linguistics, Volume 12, Number 4, October-December 1986