A general notebook walking through how to preprocess text and build prediction models for NLP classification tasks, demonstrated here on a subjectivity dataset. It also covers word embeddings and deep learning models, plus a model-interpretation tutorial using PyTorch.
In principle, any binary classification dataset should work. For the subjectivity dataset, go to https://nlp.stanford.edu/~sidaw/home/projects:nbsvm . A Markdown version of the notebook is also included, since the highlights did not render in the final part of the notebook; the Markdown version at least contains markers indicating where the highlights are.
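To plug in your own binary classification data, all the notebook really needs is a list of texts and a parallel list of 0/1 labels. A minimal loader sketch is below; the two-file layout (one sentence per line, one file per class) and the `latin-1` encoding match common distributions of the subjectivity dataset, but the exact file names and encoding of your copy are assumptions you should verify.

```python
from pathlib import Path


def load_binary_dataset(pos_path, neg_path, encoding="latin-1"):
    """Read two one-sentence-per-line files and return (texts, labels).

    Lines from pos_path get label 1, lines from neg_path get label 0.
    Blank lines are skipped.
    """
    texts, labels = [], []
    for path, label in [(pos_path, 1), (neg_path, 0)]:
        for line in Path(path).read_text(encoding=encoding).splitlines():
            line = line.strip()
            if line:
                texts.append(line)
                labels.append(label)
    return texts, labels


# Example (file names are illustrative; use whatever your dataset ships with):
# texts, labels = load_binary_dataset("subjective.txt", "objective.txt")
```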