Need help understanding "Train Your Own QA Models" Tutorial #14
Comments
@JayYip or @ash3n should be able to answer that one.
@JayYip or @ash3n can confirm, but I believe we ran very few epochs, in the single digits, possibly just 1.
We could not load the original embeddings into Google Colab without it crashing.
We have two notebooks: one uses GPT-2 to generate answers, the other does simple retrieval.
The "Float16EmbeddingsExpanded.pkl"
Try this one instead: https://github.com/Santosh-Gupta/datasets

To see whether your text data is trainable, you can train on your data with just the FFNN. That means encoding your texts with BERT (averaging all the context vectors from the second-to-last layer) and having separate FFNN layers for each of the question and answer embeddings. It won't be as good as training the BERT weights, but it's much faster and should give you decent results. If the results don't make sense this way, something may be off with your data.

If you are using scientific/medical texts, you will want to use SciBERT or BioBERT, and then use bert-as-a-service to batch-encode your texts. If not, I would recommend using TensorFlow Hub or PyTorch Hub to mass-encode your texts; I especially recommend PyTorch's RoBERTa weights.
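To make that FFNN-only suggestion concrete, here is a minimal sketch assuming a running bert-as-a-service server (started with `-pooling_layer -2 -pooling_strategy REDUCE_MEAN` for the second-to-last-layer average) and TF2 Keras. The example texts, layer sizes, and tower names are illustrative, not code from this repo:

```python
from bert_serving.client import BertClient
import tensorflow as tf

# Placeholder data; replace with your own question/answer pairs.
list_of_questions = ["What causes a sore throat?"]
list_of_answers = ["Most sore throats are caused by viral infections."]

# Fixed BERT features (no BERT fine-tuning), shape (N, 768) for a base model.
bc = BertClient()
q_vecs = bc.encode(list_of_questions)
a_vecs = bc.encode(list_of_answers)

def make_tower(name):
    # Small trainable head on top of the frozen BERT features.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(256),
    ], name=name)

q_in = tf.keras.Input(shape=(768,), name="question_features")
a_in = tf.keras.Input(shape=(768,), name="answer_features")
q_emb = tf.math.l2_normalize(make_tower("question_ffnn")(q_in), axis=-1)
a_emb = tf.math.l2_normalize(make_tower("answer_ffnn")(a_in), axis=-1)

# Cosine similarity between the two towers; train it so that matching Q/A
# pairs score higher than mismatched ones (e.g. with in-batch negatives).
score = tf.reduce_sum(q_emb * a_emb, axis=-1)
model = tf.keras.Model(inputs=[q_in, a_in], outputs=score)
```

If retrieval with this lightweight setup already surfaces sensible answers, the data is probably fine and fine-tuning the full BERT weights (as in the notebook) should only help; if it doesn't, that points back at the data.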
Sorry for the late reply.
I used a Titan Xp, but I think there's something wrong with it, since it still raised OOM even when the batch size was set to 1.
We trained for a couple of epochs. You can try something between 5 and 10.
TensorFlow and PyTorch are not that different in terms of GPU memory. A K80 should be fine for training a 12-layer transformer.
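One generic thing worth ruling out when OOM appears even at batch size 1 (a general TensorFlow tip, not something specific to this repo) is that TF pre-allocates the whole GPU by default, so another process or notebook kernel holding memory can surface as an OOM in your training run. Enabling memory growth makes the actual usage visible:

```python
import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing it all at startup.
# This must run before any op initializes the GPU.
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```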
I agree with this point, but that will take some work. We would need to change the input pipeline.
True. Maybe for another project. I am actually working on archive manatee, which uses the exact same architecture: two-tower BERT.
@Santosh-Gupta & @JayYip thanks so much guys. |
@ronykalfarisi @JayYip Hey, can you help me out in running the model locally on my machine?
@abhijeet201998 The code is tested on a Linux machine with a Titan Xp GPU. Not 100% sure whether it will work on Windows or macOS.
Hi all,
First of all, thank you so much for releasing such brilliant work. I need your help understanding the tutorial Jupyter notebook for training our own QA model. Before trying to train on "our own data", I managed to run DocProductPresentation successfully (as well as downloading all the necessary files).
To train on our own data, I downloaded "sampleData.csv" and the Train_Your_Own_QA notebook file. When I tried to run the training, I got an OOM error (my GPU is an RTX 2070 8GB). So my first step was to reduce the batch size by half, and so on. However, even after I set the batch size to 1, I still got an OOM error. Therefore, I played a little with "bert_config.json" from the BioBERT pre-trained model and changed num_hidden_layers to 6 (the default is 12), and it ran. Also, I noticed that you set num_epochs to 1, so I didn't change it.
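For anyone reproducing that workaround, the change described above amounts to editing one field in the BioBERT config. A small sketch (the path is a placeholder for wherever your checkpoint lives):

```python
import json

# Placeholder path; point this at the bert_config.json inside your BioBERT checkpoint.
config_path = "path/to/biobert/bert_config.json"

with open(config_path) as f:
    config = json.load(f)

config["num_hidden_layers"] = 6  # default is 12; fewer layers -> smaller GPU memory footprint

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```

Keep in mind that dropping to 6 hidden layers means the model no longer matches the released pre-trained weights layer-for-layer, so some loss of answer quality compared with the full 12-layer model is to be expected.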
Once the training finished (it took around 35 minutes), I used the DocProductPresentation notebook to test the new model. However, the result was totally off-topic from the question I asked. To check whether the new model works as intended, I copied one question from "sampleData.csv" and still got an off-topic answer.
So, my questions are:
Thank you so much for your help.