Gleb Gusev


2022

Uncertainty Estimation of Transformer Predictions for Misclassification Detection
Artem Vazhentsev | Gleb Kuzmin | Artem Shelmanov | Akim Tsvigun | Evgenii Tsymbalov | Kirill Fedyanin | Maxim Panov | Alexander Panchenko | Gleb Gusev | Mikhail Burtsev | Manvel Avetisian | Leonid Zhukov
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Uncertainty estimation (UE) of model predictions is a crucial step for a variety of tasks such as active learning, misclassification detection, adversarial attack detection, and out-of-distribution detection. Most work on modeling the uncertainty of deep neural networks evaluates these methods on image classification tasks, and little attention has been paid to UE in natural language processing. To fill this gap, we perform a vast empirical investigation of state-of-the-art UE methods for Transformer models on misclassification detection in named entity recognition and text classification tasks and propose two computationally efficient modifications, one of which approaches or even outperforms computationally intensive methods.
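A minimal sketch of the setting described in the abstract, not the paper's proposed modifications: the softmax-response baseline for misclassification detection, where uncertainty is taken as one minus the maximum class probability and evaluated by how well it ranks errors above correct predictions. All function names and the toy data are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def softmax_response_uncertainty(logits):
    """Uncertainty score: 1 - max class probability (higher = less confident)."""
    return 1.0 - softmax(logits).max(axis=-1)

def misclassification_detection_auc(logits, labels):
    """ROC-AUC of detecting errors (positive = misclassified) from uncertainty scores."""
    preds = logits.argmax(axis=-1)
    errors = (preds != labels).astype(float)   # 1 if the model was wrong
    scores = softmax_response_uncertainty(logits)
    # rank-based AUC: probability that a misclassified example gets a higher score
    order = scores.argsort()
    ranks = np.empty_like(scores)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = errors.sum(), (1 - errors).sum()
    if n_pos == 0 or n_neg == 0:
        return float("nan")
    return (ranks[errors == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# toy usage with random logits and labels
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 4))
labels = rng.integers(0, 4, size=100)
print(misclassification_detection_auc(logits, labels))
```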

Towards Computationally Feasible Deep Active Learning
Akim Tsvigun | Artem Shelmanov | Gleb Kuzmin | Leonid Sanochkin | Daniil Larionov | Gleb Gusev | Manvel Avetisian | Leonid Zhukov
Findings of the Association for Computational Linguistics: NAACL 2022

Active learning (AL) is a prominent technique for reducing the annotation effort required for training machine learning models. Deep learning offers a solution for several essential obstacles to deploying AL in practice but introduces many others. One such problem is the excessive computational resources required to train an acquisition model and estimate its uncertainty on instances in the unlabeled pool. We propose two techniques that tackle this issue for text classification and tagging tasks, offering a substantial reduction of AL iteration duration and of the computational overhead introduced by deep acquisition models in AL. We also demonstrate that our algorithm, which leverages pseudo-labeling and distilled models, overcomes an essential obstacle revealed previously in the literature: due to differences between the acquisition model used to select instances during AL and the successor model trained on the labeled data, the benefits of AL can diminish. We show that our algorithm, despite using a smaller and faster acquisition model, is capable of training a more expressive successor model with higher performance.
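A generic sketch of the acquisition/successor split discussed in the abstract, not the paper's exact pseudo-labeling and distillation algorithm: a small, cheap acquisition model picks which unlabeled instances to annotate via least-confidence sampling, and a larger successor model is trained on the collected labels. The model and dataset choices are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = list(rng.choice(len(X), size=50, replace=False))      # small seed set
pool = [i for i in range(len(X)) if i not in set(labeled)]

for al_iteration in range(5):
    # 1) train the cheap acquisition model on the currently labeled data
    acquisition = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    # 2) score the unlabeled pool by uncertainty (least-confidence sampling)
    probs = acquisition.predict_proba(X[pool])
    uncertainty = 1.0 - probs.max(axis=1)
    # 3) "annotate" the most uncertain instances and move them to the labeled set
    query = np.argsort(-uncertainty)[:50]
    newly_labeled = [pool[i] for i in query]
    labeled.extend(newly_labeled)
    pool = [i for i in pool if i not in set(newly_labeled)]

# 4) train the more expressive successor model on all collected labels
successor = RandomForestClassifier(n_estimators=200, random_state=0)
successor.fit(X[labeled], y[labeled])
print("successor accuracy on remaining pool:", successor.score(X[pool], y[pool]))
```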

2017

Riemannian Optimization for Skip-Gram Negative Sampling
Alexander Fonarev | Oleksii Grinchuk | Gleb Gusev | Pavel Serdyukov | Ivan Oseledets
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The Skip-Gram Negative Sampling (SGNS) word embedding model, well known through its implementation in the "word2vec" software, is usually optimized by stochastic gradient descent. However, optimization of the SGNS objective can be viewed as a problem of searching for a good matrix under a low-rank constraint. The most standard way to solve this type of problem is to apply the Riemannian optimization framework, optimizing the SGNS objective over the manifold of matrices of the required low rank. In this paper, we propose an algorithm that optimizes the SGNS objective using Riemannian optimization and demonstrate its superiority over popular competitors, such as the original method for training SGNS and SVD over the SPPMI matrix.
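A minimal sketch of the low-rank optimization idea in the abstract, not the paper's exact algorithm: take a gradient step in the ambient matrix space and retract back onto the manifold of rank-d matrices via truncated SVD. The actual SGNS objective is a sum of log-sigmoid terms over (word, context) pairs; the toy quadratic objective and all names below are illustrative assumptions.

```python
import numpy as np

def truncated_svd_retraction(X, d):
    """Project a matrix onto the closest rank-d matrix (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :d] * s[:d]) @ Vt[:d, :]

def optimize_low_rank(grad_f, shape, d, lr=0.1, n_steps=200, seed=0):
    """Gradient step in ambient space followed by retraction to the rank-d manifold."""
    rng = np.random.default_rng(seed)
    X = truncated_svd_retraction(rng.normal(size=shape), d)  # rank-d starting point
    for _ in range(n_steps):
        X = truncated_svd_retraction(X - lr * grad_f(X), d)
    return X

# toy usage: find the best rank-5 approximation of a target matrix M,
# i.e. minimize 0.5 * ||X - M||_F^2 whose gradient is X - M
M = np.random.default_rng(1).normal(size=(50, 40))
X_star = optimize_low_rank(lambda X: X - M, M.shape, d=5)
print(np.linalg.norm(X_star - M))
```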