Neural Particle Smoothing for Sampling from Conditional Sequence Models

Chu-Cheng Lin, Jason Eisner


Abstract
We introduce neural particle smoothing, a sequential Monte Carlo method for sampling annotations of an input string from a given probability model. In contrast to conventional particle filtering algorithms, we train a proposal distribution that looks ahead to the end of the input string by means of a right-to-left LSTM. We demonstrate that this innovation can improve the quality of the sample. To motivate our formal choices, we explain how neural transduction models and our sampler can be viewed as low-dimensional but nonlinear approximations to working with HMMs over very large state spaces.
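To make the idea in the abstract concrete, here is a minimal toy sketch (not the authors' implementation) of how a right-to-left "lookahead" score reshapes a sequential Monte Carlo proposal. For a small HMM we can compute the backward messages exactly, and they stand in for the paper's right-to-left LSTM; all parameter values and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, V, T, N = 3, 5, 8, 1000   # tags, vocabulary size, string length, particles

# Toy HMM parameters (hypothetical; the paper's actual models are neural).
trans = rng.dirichlet(np.ones(K), size=K)  # trans[i, j] = p(z_t = j | z_{t-1} = i)
emit  = rng.dirichlet(np.ones(V), size=K)  # emit[j, v]  = p(x_t = v | z_t = j)
init  = rng.dirichlet(np.ones(K))          # init[j]     = p(z_1 = j)
x = rng.integers(V, size=T)                # an observed input string

# Right-to-left pass, playing the role of the paper's right-to-left LSTM:
# beta[t, i] = p(x_{t+1:T} | z_t = i), computed exactly for this tiny HMM.
beta = np.ones((T, K))
for t in range(T - 2, -1, -1):
    beta[t] = trans @ (emit[:, x[t + 1]] * beta[t + 1])

def smc(lookahead):
    """Sequential importance sampling of tag sequences z given x.
    lookahead[t, i] rescales the proposal toward futures the model likes."""
    z = np.zeros((N, T), dtype=int)
    logw = np.zeros(N)                     # log importance weights
    for t in range(T):
        prior = np.tile(init, (N, 1)) if t == 0 else trans[z[:, t - 1]]
        target = prior * emit[:, x[t]]     # (N, K) unnormalized model increment
        proposal = target * lookahead[t]   # fold in evidence from the future
        proposal /= proposal.sum(axis=1, keepdims=True)
        z[:, t] = [rng.choice(K, p=p) for p in proposal]
        rows = np.arange(N)
        logw += np.log(target[rows, z[:, t]]) - np.log(proposal[rows, z[:, t]])
    w = np.exp(logw - logw.max())
    return z, w.sum() ** 2 / (w ** 2).sum()   # samples + effective sample size

_, ess_filter = smc(np.ones((T, K)))  # particle filtering: no lookahead
_, ess_smooth = smc(beta)             # particle smoothing: exact lookahead
print(f"ESS without lookahead: {ess_filter:6.1f} / {N}")
print(f"ESS with lookahead:    {ess_smooth:6.1f} / {N}")
```

With exact lookahead the proposal matches the posterior, so the importance weights stay uniform and the effective sample size equals N; without it the weights degenerate. The paper's trained LSTM approximates these backward scores for neural models whose state spaces are far too large for an exact right-to-left pass; resampling steps, omitted here for brevity, are the usual further refinement in sequential Monte Carlo.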
Anthology ID:
N18-1085
Volume:
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Marilyn Walker, Heng Ji, Amanda Stent
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
929–941
URL:
https://aclanthology.org/N18-1085
DOI:
10.18653/v1/N18-1085
Cite (ACL):
Chu-Cheng Lin and Jason Eisner. 2018. Neural Particle Smoothing for Sampling from Conditional Sequence Models. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 929–941, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Neural Particle Smoothing for Sampling from Conditional Sequence Models (Lin & Eisner, NAACL 2018)
PDF:
https://aclanthology.org/N18-1085.pdf
Note:
N18-1085.Notes.pdf