Reproducing TGIF-QA performance #12
Hi @prote376, We have the TGIF-QA frameQA config available here: https://github.com/jayleicn/ClipBERT/blob/main/src/configs/tgif_qa_frameqa_base_resnet50.json. If I remember correctly, we trained with this config on 4 GPUs. If you are using a different number of GPUs, there may be some performance difference. Best,
Thank you for the quick reply. At first, I used the config file with 4 GPUs. Then I noticed that some of its parameters differed from those reported in the paper,
so I tried again with the paper's parameters. Could you check whether my settings are correct when you have time?
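One common reason the same config gives different results on a different number of GPUs is that the effective optimization batch size scales with the GPU count. A minimal sketch of that relationship (the formula is a standard assumption for data-parallel training; the actual ClipBERT config keys may differ):

```python
def effective_batch_size(per_gpu_batch, n_gpus, grad_accum_steps=1):
    """Effective optimization batch = per-GPU batch x GPUs x accumulation steps.

    Hypothetical helper for illustration; names do not come from the
    ClipBERT codebase.
    """
    return per_gpu_batch * n_gpus * grad_accum_steps

# The same per-GPU batch of 8 yields different effective batches:
print(effective_batch_size(8, 4))  # -> 32 (4 GPUs, as in the authors' run)
print(effective_batch_size(8, 2))  # -> 16 (2 GPUs)
```

If the effective batch changes, the learning rate usually needs to be rescaled accordingly, which may explain the gap from the paper's numbers.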
I have added my coauthor Linjie @linjieli222, who conducted this experiment. Hi Linjie, could you help us verify the configurations? Thanks! Best,
Hi, I got a batch-size mismatch under the 'action' setting. The problem seems to be that the question is concatenated with the options 'n_options' times, which makes the text batch size differ from that of the visual embeddings. The relevant code is in src/dataset/dataset_video_qa.py. Can you help me with this problem?
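To illustrate the mismatch described above: in multiple-choice settings such as 'action', each question is paired with every candidate answer, so the text batch grows by a factor of n_options while there is still only one set of visual features per video. A minimal sketch of the problem and the usual fix, repeating the visual features to match (function and variable names here are illustrative, not taken from the ClipBERT source):

```python
def build_text_batch(questions, options_per_question):
    """Pair each question with each of its answer options, so the
    text batch expands to batch_size * n_options entries."""
    batch = []
    for q, opts in zip(questions, options_per_question):
        for opt in opts:
            batch.append(f"{q} [SEP] {opt}")
    return batch

def repeat_visual_batch(visual_feats, n_options):
    """Repeat each visual feature n_options times so the visual batch
    dimension matches the expanded text batch."""
    return [v for v in visual_feats for _ in range(n_options)]

questions = ["what does the person do"] * 2            # batch of 2 questions
options = [["run", "jump", "sit", "wave", "nod"]] * 2  # 5 options each
visual = ["feat_0", "feat_1"]                          # one feature per video

text_batch = build_text_batch(questions, options)      # 2 * 5 = 10 entries
visual_batch = repeat_visual_batch(visual, n_options=5)
assert len(text_batch) == len(visual_batch) == 10
```

If only the text side is expanded and the visual side is left at the original batch size, the two tensors can no longer be aligned pairwise, which matches the error reported here.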
Hello. Thank you for releasing this code.
Following the settings described in the paper and the steps in this GitHub repo, I am trying to reproduce the TGIF-QA frameQA task.
However, I cannot reach the performance reported in the paper, although I have tried many different settings.
It seems that the parameters in my config file differ from the original ones.
Could you share the config file you used for the TGIF-QA frameQA task?