Modeling Confidence in Sequence-to-Sequence Models

Jan Niehues, Ngoc-Quan Pham


Abstract
Recently, significant improvements have been achieved in various natural language processing tasks using neural sequence-to-sequence models. While aiming for the best generation quality is important, ultimately it is also necessary to develop models that can assess the quality of their output. In this work, we propose to use the similarity between training and test conditions as a measure for models’ confidence. We investigate methods solely using the similarity as well as methods combining it with the posterior probability. While traditionally only target tokens are annotated with confidence measures, we also investigate methods to annotate source tokens with confidence. By learning an internal alignment model, we can significantly improve confidence projection over using state-of-the-art external alignment tools. We evaluate the proposed methods on downstream confidence estimation for machine translation (MT). We show improvements on segment-level confidence estimation as well as on confidence estimation for source tokens. In addition, we show that the same methods can also be applied to other tasks using sequence-to-sequence models. On the automatic speech recognition (ASR) task, we are able to find 60% of the errors by looking at 20% of the data.
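The abstract gives no implementation details, but the core idea of combining a model's posterior probability with a train/test similarity score can be illustrated with a rough sketch. All names, the cosine-similarity choice, and the linear interpolation below are assumptions for illustration, not the paper's actual method:

```python
import numpy as np

def similarity_confidence(test_vec, train_vecs):
    """Cosine similarity between a test representation and its
    nearest training representation (a proxy for how close the
    test condition is to the training data)."""
    sims = train_vecs @ test_vec / (
        np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(test_vec) + 1e-9
    )
    return float(sims.max())

def combined_confidence(posterior, similarity, alpha=0.5):
    """Interpolate the model's posterior probability with the
    train/test similarity to obtain one confidence score."""
    return alpha * posterior + (1 - alpha) * similarity
```

A test vector identical to some training vector yields a similarity near 1, so the combined score reduces to an average of a high similarity and the posterior; unfamiliar inputs pull the score down even when the posterior is overconfident.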
Anthology ID:
W19-8671
Volume:
Proceedings of the 12th International Conference on Natural Language Generation
Month:
October–November
Year:
2019
Address:
Tokyo, Japan
Editors:
Kees van Deemter, Chenghua Lin, Hiroya Takamura
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
575–583
URL:
https://aclanthology.org/W19-8671
DOI:
10.18653/v1/W19-8671
Cite (ACL):
Jan Niehues and Ngoc-Quan Pham. 2019. Modeling Confidence in Sequence-to-Sequence Models. In Proceedings of the 12th International Conference on Natural Language Generation, pages 575–583, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal):
Modeling Confidence in Sequence-to-Sequence Models (Niehues & Pham, INLG 2019)
PDF:
https://aclanthology.org/W19-8671.pdf