Fine Tuning
#170
Replies: 1 comment
-
GLiNER is indeed a great tool. I opened an issue a while ago related to catastrophic forgetting while training, so please see that issue, where Urchade has suggested a training solution: The NER pile data he refers to can be found here:
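To illustrate the kind of mitigation suggested there (interleaving the new domain data with the original general NER data so the model keeps rehearsing what it already knows), here is a minimal sketch. The file paths and the 4:1 replay ratio are placeholders, and it assumes both files are in GLiNER's JSON training format with `tokenized_text` and `ner` keys:

```python
import json
import random

# Placeholder paths -- point these at your own domain data and at the
# NER pile JSON linked in the issue above.
DOMAIN_DATA_PATH = "crime_reports_train.json"   # small domain-specific set
PILE_DATA_PATH = "ner_pile.json"                # large general NER data

# Each training example is assumed to be a dict with
# "tokenized_text" and "ner" keys (GLiNER's training format).
with open(DOMAIN_DATA_PATH) as f:
    domain_data = json.load(f)
with open(PILE_DATA_PATH) as f:
    pile_data = json.load(f)

# Mix in several general examples per domain example so the model keeps
# seeing its original distribution and forgets less of what it knew.
random.seed(42)
replay_ratio = 4  # assumption: 4 general examples per domain example
replay = random.sample(pile_data, min(len(pile_data), replay_ratio * len(domain_data)))

mixed = domain_data + replay
random.shuffle(mixed)

with open("mixed_train.json", "w") as f:
    json.dump(mixed, f)

print(f"{len(domain_data)} domain + {len(replay)} replay = {len(mixed)} training examples")
```

The mixed file can then be used as the training set in the fine-tuning script, in place of the domain data alone.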
-
Hello Urchade,
Thank you for this great work. It's really amazing how powerful GLiNER is. I am trying to fine-tune it using the code you provided on GitHub. My dataset has only 118 examples, which is quite small. I tried fine-tuning GLiNER Medium on it; however, the model's performance dropped a lot when testing.
I am not sure whether this is due to the small dataset, since in the example you provided the dataset was about 20k examples, or whether the training procedure relies too much on the new fine-tuning data and drops the model's previous knowledge (its pretrained weights).
My aim is to fine-tune the model on domain-specific named entities, for example crime reports, but I don't have much data for training; that is why I am looking for a pretrained model that can do the work after fine-tuning.
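As a quick way to check whether the drop reflects general forgetting rather than just the small domain set, one could run the base checkpoint and the fine-tuned checkpoint on the same held-out sentence with `predict_entities`; the local checkpoint path, sentence, labels, and 0.5 threshold below are only placeholders for illustration:

```python
from gliner import GLiNER

# Compare the original checkpoint against the fine-tuned one on the same
# held-out text; a large gap on *general* entity types usually points to
# forgetting rather than to the small dataset alone.
text = "Officer Jane Miller arrested two suspects in downtown Chicago on 12 March 2023."
labels = ["person", "location", "date", "crime"]

for name in ["urchade/gliner_medium-v2.1", "./my_finetuned_gliner"]:  # second path is a placeholder
    model = GLiNER.from_pretrained(name)
    entities = model.predict_entities(text, labels, threshold=0.5)
    print(name)
    for ent in entities:
        print(f"  {ent['text']} -> {ent['label']} ({ent['score']:.2f})")
```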