Pulling Out the Stops: Rethinking Stopword Removal for Topic Models

Alexandra Schofield, Måns Magnusson, David Mimno


Abstract
It is often assumed that topic models benefit from the use of a manually curated stopword list. Constructing this list is time-consuming and often subject to user judgments about what kinds of words are important to the model and the application. Although stopword removal clearly affects which word types appear as most probable terms in topics, we argue that this improvement is superficial, and that topic inference benefits little from the practice of removing stopwords beyond very frequent terms. Removing corpus-specific stopwords after model inference is more transparent and produces similar results to removing those words prior to inference.
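To make the workflow the abstract describes concrete, below is a minimal sketch of post-inference stopword filtering. It assumes gensim's LdaModel with a toy corpus and an illustrative stopword list; none of these choices reflect the paper's actual experimental setup, and the paper's own tooling is not specified on this page. The model is trained on the full vocabulary, and stopwords are filtered only when reporting top terms.

    # A minimal sketch of post-inference stopword filtering, assuming gensim.
    # The corpus and stopword list are illustrative placeholders, not the
    # paper's experimental data.
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel

    docs = [
        ["the", "cat", "sat", "on", "the", "mat"],
        ["the", "dog", "ate", "the", "homework"],
        ["a", "cat", "and", "a", "dog", "played"],
    ]
    stopwords = {"the", "a", "an", "and", "on", "of"}

    # Train on the full vocabulary: stopwords are kept during inference.
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    lda = LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

    # Remove stopwords only when displaying topics, after inference.
    for topic_id in range(lda.num_topics):
        terms = lda.show_topic(topic_id, topn=20)  # list of (word, prob)
        top_words = [w for w, _ in terms if w not in stopwords][:5]
        print(f"Topic {topic_id}: {', '.join(top_words)}")

Because the filtering happens at display time, the same trained model can be re-examined under different stopword lists without re-running inference, which is one sense in which the post-hoc approach is more transparent.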
Anthology ID: E17-2069
Volume: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
Month: April
Year: 2017
Address: Valencia, Spain
Editors: Mirella Lapata, Phil Blunsom, Alexander Koller
Venue: EACL
Publisher: Association for Computational Linguistics
Pages: 432–436
URL: https://aclanthology.org/E17-2069
Cite (ACL): Alexandra Schofield, Måns Magnusson, and David Mimno. 2017. Pulling Out the Stops: Rethinking Stopword Removal for Topic Models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 432–436, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal): Pulling Out the Stops: Rethinking Stopword Removal for Topic Models (Schofield et al., EACL 2017)
PDF: https://aclanthology.org/E17-2069.pdf