After the corpus is created and saved, create a notebook (src/notebook/) where we explore the features of the corpus.
In a helper script (src/papers/data/), load the dataset and build word-frequency matrices for both the preprocessed corpus and the original one.
Expose functions that return these matrices and their properties.
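The helper functions could look something like the following minimal sketch. The function names and the list-of-lists matrix representation are assumptions, not a prescribed API; a real implementation might use scipy sparse matrices or sklearn's CountVectorizer instead.

```python
from collections import Counter

def build_frequency_matrix(documents):
    """Build a document-term frequency matrix from tokenized documents.

    Returns (matrix, vocabulary), where matrix[i][j] is the count of
    vocabulary[j] in documents[i].
    """
    vocabulary = sorted({token for doc in documents for token in doc})
    index = {word: j for j, word in enumerate(vocabulary)}
    matrix = [[0] * len(vocabulary) for _ in documents]
    for i, doc in enumerate(documents):
        for word, count in Counter(doc).items():
            matrix[i][index[word]] = count
    return matrix, vocabulary

def total_frequencies(matrix, vocabulary):
    """Corpus-wide count of each word, most frequent first."""
    totals = [sum(column) for column in zip(*matrix)]
    return sorted(zip(vocabulary, totals), key=lambda pair: -pair[1])
```

Calling `build_frequency_matrix` once on the preprocessed corpus and once on the original one yields the two matrices the notebook will compare.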
In the Jupyter notebook, explore the matrices. Which words are the most frequent in each one?
Feel free to explore and display more properties of the datasets.
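In the notebook, the most-frequent-words question can be answered with a few lines like the sketch below. The tiny inline corpora are placeholders; in the actual notebook the data would come from the helper script in src/papers/data/.

```python
from collections import Counter

# Placeholder tokenized corpora standing in for the loaded datasets.
original = [["the", "model", "the", "results"], ["the", "data"]]
preprocessed = [["model", "result"], ["data"]]

def top_words(corpus, n=3):
    """Flatten a tokenized corpus and return its n most frequent words."""
    counts = Counter(token for doc in corpus for token in doc)
    return counts.most_common(n)

print(top_words(original))      # stopwords like "the" usually dominate here
print(top_words(preprocessed))  # content words should surface after cleaning
```

Comparing the two rankings side by side makes the effect of the preprocessing (stopword removal, stemming, etc.) immediately visible.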