The preprocessing pipeline used for training #8
Hi! Just a quick answer to show that I check the issues. I'll get back to you ASAP.

On Fri, Nov 11, 2022 at 1:43 PM, Anton Alekseev ***@***.***> wrote:
Dear colleague,
thank you for your work!
May I ask: what is the right way to use the lemmatizer/PoS tagger? Which pie tokenizer or other preprocessing steps should be used (for best quality)?
Here's my minimal working example. Is this *exactly the same pipeline* you used at the training and evaluation stages?
```python
# coding: utf-8
from pie.tagger import Tagger
from pie.tagger import simple_tokenizer
from pie.utils import model_spec

device, batch_size, model_file = "cpu", 4, "../models/lasla-plus-lemma.tar"
data = "Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. " \
       "Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."

# Load the model(s) declared in the model spec into the tagger.
tagger = Tagger(device=device, batch_size=batch_size)
for model, tasks in model_spec(model_file):
    tagger.add_model(model, *tasks)

# Tokenize into sentences and record each sentence's length.
sents, lengths = [], []
for sentence in simple_tokenizer(data):
    sents.append(sentence)
    lengths.append(len(sentence))

tagged, tasks = tagger.tag(sents=sents, lengths=lengths)
print("Tagged:", tagged)
print("Tasks:", tasks)
```
Thank you in advance.
Best regards,
Anton.
If you just wish to tag, your best shot is https://github.com/hipster-philology/nlp-pie-taggers, where I introduced all of the preprocessing AND the post-processing (specifically for enclitics like -que, -ve, -ne). Preprocessing (off the top of my head):
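As a rough illustration of the enclitic splitting mentioned above, here is a minimal sketch; the `ENCLITICS` tuple and the `split_enclitics` helper are hypothetical names used for illustration only, not the actual nlp-pie-taggers implementation, which also handles exception words:

```python
# Hypothetical sketch of Latin enclitic splitting, NOT the actual
# nlp-pie-taggers code: a real implementation also needs an exception
# list (e.g. "quoque" or "bene" must not be split).
ENCLITICS = ("que", "ne", "ve")

def split_enclitics(token):
    """Split a trailing enclitic off a Latin token, if one is present."""
    for enc in ENCLITICS:
        if token.lower().endswith(enc) and len(token) > len(enc):
            # Separate the base word from the enclitic.
            return [token[:-len(enc)], token[-len(enc):]]
    return [token]

tokens = []
for word in "arma virumque cano".split():
    tokens.extend(split_enclitics(word))
print(tokens)  # ['arma', 'virum', 'que', 'cano']
```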
Hi, thank you for the swift response! I'm afraid I am going to abuse your kindness once again and ask a few more questions a bit later, after I take a closer look at nlp-pie-taggers. Thanks!
Sure, feel free!