Nicole Peinelt


2021

GiBERT: Enhancing BERT with Linguistic Information using a Lightweight Gated Injection Method
Nicole Peinelt | Marek Rei | Maria Liakata
Findings of the Association for Computational Linguistics: EMNLP 2021

Large pre-trained language models such as BERT have been the driving force behind recent improvements across many NLP tasks. However, BERT is trained only on masked word prediction and next sentence prediction, and has no explicit knowledge of lexical, syntactic, or semantic information beyond what it picks up through unsupervised pre-training. We propose a novel method to explicitly inject linguistic information in the form of word embeddings into any layer of a pre-trained BERT. When injecting counter-fitted and dependency-based embeddings, the performance improvements on multiple semantic similarity datasets indicate that such information is beneficial and currently missing from the original model. Our qualitative analysis shows that counter-fitted embedding injection is particularly beneficial, with notable improvements on examples that require synonym resolution.
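To make the gated injection concrete, the following is a minimal PyTorch sketch of one plausible formulation: external word embeddings are projected into BERT's hidden space and added to a chosen layer's output through a learned sigmoid gate. The module name, dimensions, and exact gating formula are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GatedInjection(nn.Module):
    """Sketch of gated injection: adds external word embeddings
    (e.g. counter-fitted or dependency-based) into a BERT hidden layer.
    The gating formulation here is an illustrative assumption."""

    def __init__(self, hidden_size: int, embed_size: int):
        super().__init__()
        # Project external embeddings into BERT's hidden space.
        self.proj = nn.Linear(embed_size, hidden_size)
        # Gate controls, per position and dimension, how much is injected.
        self.gate = nn.Linear(hidden_size + embed_size, hidden_size)

    def forward(self, hidden: torch.Tensor, word_emb: torch.Tensor) -> torch.Tensor:
        # hidden:   (batch, seq_len, hidden_size) output of a BERT layer
        # word_emb: (batch, seq_len, embed_size) token-aligned external embeddings
        g = torch.sigmoid(self.gate(torch.cat([hidden, word_emb], dim=-1)))
        return hidden + g * self.proj(word_emb)
```

The residual form keeps the original BERT representation intact when the gate is closed, which is one way such an injection can stay lightweight.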

2020

tBERT: Topic Models and BERT Joining Forces for Semantic Similarity Detection
Nicole Peinelt | Dong Nguyen | Maria Liakata
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Semantic similarity detection is a fundamental task in natural language understanding. Adding topic information has been useful for previous feature-engineered semantic similarity models, as well as for neural models on other tasks. There is currently no standard way of combining topics with pre-trained contextual representations such as BERT. We propose a novel topic-informed BERT-based architecture for pairwise semantic similarity detection and show that our model improves performance over strong neural baselines across a variety of English language datasets. We find that the addition of topics to BERT helps particularly with resolving domain-specific cases.
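One simple way to realise a topic-informed architecture is to concatenate a document-level topic distribution (e.g. from LDA) with BERT's [CLS] representation before the classification layer. The sketch below follows that idea; the layer sizes and the use of a single shared topic vector for the pair are assumptions for illustration.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class TopicBertClassifier(nn.Module):
    """Sketch: concatenate topic proportions with BERT's sentence-pair
    representation before a linear classifier. Topic source (e.g. LDA)
    and layer sizes are illustrative assumptions."""

    def __init__(self, num_topics: int, num_labels: int = 2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        self.classifier = nn.Linear(hidden + num_topics, num_labels)

    def forward(self, input_ids, attention_mask, topic_dist):
        # topic_dist: (batch, num_topics) topic proportions for the text pair
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] representation
        return self.classifier(torch.cat([cls, topic_dist], dim=-1))
```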

2019

Aiming beyond the Obvious: Identifying Non-Obvious Cases in Semantic Similarity Datasets
Nicole Peinelt | Maria Liakata | Dong Nguyen
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Existing datasets for scoring text pairs in terms of semantic similarity contain instances that differ in how difficult they are to resolve. This paper proposes to distinguish obvious from non-obvious text pairs based on superficial lexical overlap and ground-truth labels. We characterise existing datasets in terms of the proportion of difficult cases they contain and find that recently proposed models struggle to capture the non-obvious cases of semantic similarity. We describe metrics that emphasise cases of similarity which require more complex inference and propose that these be used for evaluating systems for semantic similarity.
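The obvious/non-obvious split described above can be approximated with a simple heuristic: pairs with high lexical overlap and a positive label, or low overlap and a negative label, count as obvious, and the rest as non-obvious. The tokenisation and the threshold in this sketch are assumptions.

```python
def jaccard_overlap(a: str, b: str) -> float:
    """Word-level Jaccard overlap between two texts (naive whitespace tokens)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def is_obvious(text_a: str, text_b: str, label: int, threshold: float = 0.5) -> bool:
    """Heuristic split: high overlap with a positive label, or low overlap
    with a negative label, is 'obvious'. The threshold is an assumption."""
    return (jaccard_overlap(text_a, text_b) >= threshold) == (label == 1)
```

Metrics computed separately over the obvious and non-obvious partitions then expose how much a model relies on surface overlap.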

2017

ClassifierGuesser: A Context-based Classifier Prediction System for Chinese Language Learners
Nicole Peinelt | Maria Liakata | Shu-Kai Hsieh
Proceedings of the IJCNLP 2017, System Demonstrations

Classifiers (measure words) are function words used to express quantities in Chinese and are especially difficult for language learners. In contrast to previous studies, we argue that the choice of classifier is highly contextual, and we train context-aware machine learning models on a novel publicly available dataset, outperforming previous baselines. We further present use cases for our database and models in an interactive demo system.
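As a rough illustration of context-based classifier prediction, the task can be framed as multi-class text classification over the words surrounding the classifier slot. The toy data, features, and model below are assumptions, not the system's actual pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: pre-segmented contexts with the classifier slot
# masked as [CL], paired with the correct classifier (illustrative only).
contexts = ["我 买 了 一 [CL] 书", "他 养 了 两 [CL] 狗", "桌上 有 三 [CL] 纸"]
labels = ["本", "只", "张"]

# Bag-of-words over the context; token_pattern keeps single-character
# Chinese tokens. A real system would add richer features (e.g. head noun).
model = make_pipeline(
    CountVectorizer(token_pattern=r"(?u)\S+"),
    LogisticRegression(max_iter=1000),
)
model.fit(contexts, labels)

print(model.predict(["我 想 借 一 [CL] 书"]))  # likely 本, via shared context words
```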

2016

Automatically Suggesting Example Sentences of Near-Synonyms for Language Learners
Chieh-Yang Huang | Nicole Peinelt | Lun-Wei Ku
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations

In this paper, we propose GiveMeExample, a system that ranks example sentences according to their capacity to demonstrate the differences among English and Chinese near-synonyms for language learners. The difficulty of the example sentences is detected automatically. Furthermore, usage models of the near-synonyms are built with Gaussian mixture models (GMMs) and Bi-LSTMs to suggest the most elaborative sentences. Experiments show good performance both on a fill-in-the-blank test and on manually labelled gold data; that is, the models can select the appropriate word for a given context and appropriate example sentences for a given word.
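A minimal version of the ranking idea: given a fill-in-the-blank scorer (e.g. backed by a Bi-LSTM or a GMM usage model), rank candidate sentences by how strongly they select the target word over its near-synonym. The function names and the difference-based score are illustrative assumptions.

```python
from typing import Callable, List, Tuple

# fill_prob(sentence, word) -> probability of `word` filling the blank in
# `sentence`; assumed to be provided by a trained usage model.
FillProb = Callable[[str, str], float]

def discrimination_score(fill_prob: FillProb, sentence: str,
                         target: str, confuser: str) -> float:
    """High when the blanked sentence strongly prefers the target word
    over its near-synonym; the difference form is an assumption."""
    return fill_prob(sentence, target) - fill_prob(sentence, confuser)

def rank_examples(fill_prob: FillProb, sentences: List[str],
                  target: str, confuser: str) -> List[Tuple[float, str]]:
    """Rank candidate example sentences (target word blanked out) by their
    capacity to demonstrate the difference between the two near-synonyms."""
    scored = [(discrimination_score(fill_prob, s, target, confuser), s)
              for s in sentences]
    return sorted(scored, reverse=True)
```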