Term Co-associations + Topic Modeling
This page contains supplementary material for the paper "Explainable Ensemble Topic Models using Weighted Term Co-associations" (2020).
Summary
Topic modeling is a popular unsupervised technique used to discover the latent thematic structure in text corpora. The evaluation of topic models typically involves measuring the semantic coherence of the terms describing each topic, where a single value is often used to summarize the quality of an overall model. However, this makes it difficult to interpret the strengths and weaknesses of a given model. With this in mind, we propose a new ensemble topic modeling approach that incorporates both stability information, in the form of term co-associations, and semantic coherence information, derived from a word embedding constructed on a background corpus. Our evaluations show that this approach can yield higher quality models, in terms of both the produced topic descriptors and the document-topic assignments, while also facilitating the comparison and evaluation of solutions through the visualization of the discovered topical structure, the ordering of the topic descriptors, and the further investigation of term pairs.
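The stability component above relies on term co-associations: how often a pair of terms appears together in a topic descriptor across the runs of an ensemble. As a minimal sketch of this idea (not the paper's exact weighting scheme; the function and the toy data below are illustrative assumptions), we can count, for each term pair, the fraction of ensemble runs in which the two terms co-occur in some topic:

```python
from itertools import combinations
from collections import defaultdict

def coassociation_weights(runs):
    """Illustrative term co-association measure for an ensemble.

    runs: a list of topic models, each a list of topics, each topic a
    list of top-ranked descriptor terms.
    Returns a dict mapping a sorted term pair to the fraction of runs
    in which the pair co-occurred within a single topic descriptor.
    """
    counts = defaultdict(int)
    for topics in runs:
        seen = set()  # count each pair at most once per run
        for terms in topics:
            for pair in combinations(sorted(set(terms)), 2):
                seen.add(pair)
        for pair in seen:
            counts[pair] += 1
    n = len(runs)
    return {pair: c / n for pair, c in counts.items()}

# Toy ensemble: two runs, each producing two topic descriptors
runs = [
    [["sport", "match", "goal"], ["election", "vote"]],
    [["sport", "goal"], ["election", "vote", "party"]],
]
weights = coassociation_weights(runs)
# ("goal", "sport") co-occurs in both runs -> weight 1.0
# ("match", "sport") co-occurs in one of two runs -> weight 0.5
```

Pairs with high co-association are stable across runs, while pairs that rarely co-occur flag less reliable parts of a topic descriptor; the full approach additionally weights these associations using embedding-based semantic coherence.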
Word Embeddings
We have created three different pre-trained word embedding models, each constructed using the word2vec Continuous Bag-Of-Words (CBOW) architecture, with 100 dimensions and a window size of 5. These were trained on the following corpora:
- Embedding trained on 15 years of news article data from The Guardian. — Download (218MB)
- Embedding trained using a large collection of Wikipedia long abstracts collected by Qureshi & Greene (2016). — Download (521MB)
- Embedding trained on a corpus of CNN and Daily Mail news articles previously compiled by Hermann et al. (2015). — Download (95MB)
Corpora
Pre-processed versions of The Guardian text corpus are made available here for research purposes only.
>> Download pre-processed text corpora (779MB)