# Text-Prediction

This program implements a statistical trigram language model with NLTK to predict text, trained on the *Alice in Wonderland* corpus.

## Getting started

1. Clone or download this repository: `git clone https://github.com/jadessechan/Text-Prediction.git`
2. Run `main.py`.
3. Once prompted by the program, enter a phrase related to the corpus.

## Demo

Lines 80-86 of `main.py` display n-gram statistics of the corpus and are commented out by default.
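
For reference, here is a minimal sketch of the kind of trigram statistics those lines produce, using NLTK's `FreqDist`. This is not the repository's actual code: the corpus file name and tokenization choices are assumptions for illustration only.

```python
# Hedged sketch, not the code in main.py: corpus path and tokenization
# are assumptions made for this example.
import nltk
from nltk import word_tokenize, trigrams, FreqDist

nltk.download("punkt", quiet=True)          # tokenizer models, first run only

with open("alice_in_wonderland.txt", encoding="utf-8") as f:  # hypothetical filename
    tokens = word_tokenize(f.read().lower())

tri_freq = FreqDist(trigrams(tokens))       # count every trigram in the corpus

print(tri_freq.most_common(10))             # ten most frequent trigrams
tri_freq.plot(30)                           # plot the 30 most common trigrams (needs matplotlib)
```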

Here is a frequency distribution plot of the 30 most common trigrams: *(image: frequency distribution of the top 30 trigrams)*

Here is an example of the program output: *(image: demo of the running program)*

Final output of the demo:

> User input: alice said to the
> Prediction: alice said to the table, half hoping she might find another *(comma added for readability)*

What did Alice want to find again?? The suspense... 😖

## Implementation

I used `ConditionalFreqDist()` from NLTK's probability module to store the frequency of each word that follows a given bigram in the corpus. The program then makes a weighted random choice with `random.choices()` to decide which prediction to append to the given phrase.
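
A minimal sketch of that approach could look like the following. It is not the repository's exact code; it assumes `tokens` is the tokenized corpus from the sketch above, and `predict_next()` is a hypothetical helper named here for illustration.

```python
# Hedged sketch of trigram prediction with ConditionalFreqDist and
# random.choices; assumes `tokens` is the tokenized corpus from above.
import random
from nltk import trigrams
from nltk.probability import ConditionalFreqDist

# Condition on the preceding two words, count the word that follows.
cfd = ConditionalFreqDist(((w1, w2), w3) for w1, w2, w3 in trigrams(tokens))

def predict_next(w1, w2):
    """Pick the next word with probability proportional to its frequency."""
    freq = cfd[(w1, w2)]
    if not freq:
        return None                      # bigram never seen in the corpus
    words = list(freq.keys())
    weights = list(freq.values())        # raw counts serve as weights
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next("alice", "said"))
```

Because `random.choices()` weights each candidate by its observed frequency, common continuations are favored without making the output fully deterministic.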

The user decides when to stop the program by choosing whether or not to predict the next word:

> "Do you want to generate another word? (type 'y' for yes or 'n' for no): "