Ability to specify custom tokenizer #131
I was thinking about adding n-gram support as well. I want to do this by abstracting tokenizing out into a separate public API that can either be called by the classifier or passed in. I'm not sure which approach would be better.
Would dependency injection be a good idea, where we create an instance of the tokenizer and then pass it during initialization of the classifier, the way we do for the storage backend support?
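Something along these lines, perhaps (a hypothetical sketch: the `tokenizer:` keyword argument and the tokenizer's `call` interface are assumptions here, mirroring how the storage backend is injected, not the library's current API):

```ruby
# Hypothetical sketch: inject a tokenizer object at initialization, the same
# way a storage backend is passed in. The `tokenizer:` keyword and the #call
# interface are assumed, not part of the current API.
class WhitespaceTokenizer
  # Returns an array of lowercase tokens split on whitespace.
  def call(text)
    text.downcase.split
  end
end

classifier = ClassifierReborn::Bayes.new('Spam', 'Ham',
                                         tokenizer: WhitespaceTokenizer.new)
classifier.train('Spam', 'Buy cheap pills now')
```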
In the first post what I described were n-grams based on words, which are also called shingles. However, one can also use letter-based n-grams, which often produce good results while putting a finite upper bound on total memory used (the maximum possible number of keys is the number of possible letters raised to the power of the n-gram length), and could be ideal for training on large collections.
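For illustration, a minimal sketch of letter-based n-grams (character trigrams here); with an alphabet of k possible characters, the key space is bounded by k^n:

```ruby
# Sketch of letter-based n-grams (character trigrams). With k possible
# characters, the number of distinct keys is at most k**n, so memory use
# has a finite upper bound no matter how large the training collection grows.
def char_ngrams(text, n = 3)
  chars = text.downcase.gsub(/\s+/, ' ').chars
  return [] if chars.size < n
  chars.each_cons(n).map(&:join)
end

char_ngrams('New York')
# => ["new", "ew ", "w y", " yo", "yor", "ork"]
```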
Yeah, I think dependency injection is the way to go here.
I've opened #161, but it should be resolved as a part of this tokenizer issue... Sorry I didn't research this before I opened it.
@piroor Thanks for hopping in!
Currently, the following code is used to split the document into tokens/words for training and classification.
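Roughly, the splitting amounts to something like this (a simplified sketch, not the library's exact implementation):

```ruby
# Simplified sketch of the current behavior: strip punctuation, downcase,
# and split on whitespace. Illustrative only; the actual code may differ.
def tokenize(text)
  text.gsub(/[^\p{Word}\s]/, '').downcase.split
end
```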
This covers the general case, but there could be situations where the user might want to customize the way a document is split into words. For example, tokenizing Japanese text could be a whole different thing. Another situation where a custom tokenizer is needed is when the user wants to train the model on N-grams (for example, bi-grams such as `New York`). Splitting `New` and `York` out of `New York` would mean `New` will be removed if it is present in the stopwords. Similarly, `to be or not to be` is another popular example of a significant phrase made entirely of common stopwords. N-grams often play a significant role in contextualizing a document and help improve the accuracy of the model in special situations. In many languages (Arabic, Persian, Urdu, etc., to name a few) two or more words are combined (still separated by spaces, only put together) to form various linguistic constructs. This could be important if one wants to know who the author of a relatively small piece of text is, such as those posted on forums.

It would be nice if we could pass a `Lambda` as a tokenizer at the time of classifier initialization, or some other more expressive means to tell the system how to split the text.
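For example, something like the following (a sketch of the proposed interface; the `tokenizer:` option is the suggested addition, not an existing one):

```ruby
# Sketch of the proposed interface: pass a lambda at initialization time.
# The `tokenizer:` keyword is the suggested addition, not an existing option.
bigram_tokenizer = lambda do |text|
  words = text.downcase.split
  # Keep unigrams and add word bigrams, so "New York" survives as one token.
  words + words.each_cons(2).map { |pair| pair.join(' ') }
end

classifier = ClassifierReborn::Bayes.new('Travel', 'Other',
                                         tokenizer: bigram_tokenizer)
```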