Question about reproducing the SlowFast+NL verb model #24
Comments
@wkw1259 Which model do you use? The best model is at epoch 3, which gets 23.39 on recall@5. Could you confirm if this is the issue?
This refers to the recall computation. That is, if a particular event has 10 verb annotations, you would only consider those appearing at least twice as the ground truth and discard the others. The macro_th_9 refers to the overall frequency in the dataset: if a verb doesn't appear at least 9 times in the entire dataset, it isn't used for the macro computation, because these classes would be very noisy.
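The two thresholds described above can be sketched roughly as follows. This is an illustrative reconstruction, not the repository's actual evaluation code; the function name, arguments, and default threshold values (`ann_th=2`, `macro_th=9`) are assumptions based on the explanation in this thread.

```python
from collections import Counter

def macro_recall_at_k(preds, gts, verb_freqs, k=5, ann_th=2, macro_th=9):
    """Hypothetical sketch of thresholded macro recall@k.

    preds:      {event_id: ranked list of predicted verbs}
    gts:        {event_id: list of verb annotations (e.g., 10 per event)}
    verb_freqs: mapping of verb -> occurrences in the whole dataset
    """
    per_verb_hits, per_verb_total = Counter(), Counter()
    for ev, ranked in preds.items():
        # Keep only verbs annotated at least `ann_th` times for this event.
        counts = Counter(gts[ev])
        gt_verbs = {v for v, c in counts.items() if c >= ann_th}
        topk = set(ranked[:k])
        for v in gt_verbs:
            per_verb_total[v] += 1
            if v in topk:
                per_verb_hits[v] += 1
    # Macro-average only over verbs frequent enough in the whole dataset
    # (the macro_th_9 filter mentioned above).
    kept = [v for v in per_verb_total if verb_freqs.get(v, 0) >= macro_th]
    if not kept:
        return 0.0
    return sum(per_verb_hits[v] / per_verb_total[v] for v in kept) / len(kept)
```

Under this sketch, a verb annotated only once for an event never counts as ground truth, and a verb rarer than the dataset-wide threshold never contributes to the macro average.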
Clear, thanks for your detailed explanation.
Thanks for releasing the relevant code! But I'm having trouble reproducing the model, and I'd like to ask for advice.
I'm trying to reproduce the SlowFast+NL verb model myself, but I found that my reproduced results differ considerably from the reported ones. For example, you report a valid recall@5 of 23.38, but I can only get 16.46, while Acc@1 rises from 46.79 to 50.66.
I ran the experiment on 4 V100 GPUs with CUDA 10.1.
To reproduce the SlowFast+NL verb model, I directly copy the bash code from the provided log file:
python main_dist.py vbonly_sfast_kpret_10Feb20 --train.bs=8 --train.bsv=8 --train.nw=8 --train.nwv=8 --task_type=vb --mdl.mdl_name=sf_base --mdl.sf_mdl_name=slow_fast_nl_r50_8x8 --debug_mode=False --train.save_mdl_epochs=True --train.resume=False --mdl.load_sf_pretrained=True
Here is the detailed evaluation results:
And here is the training log:
I did not change any other source code from the GitHub repository. Is there any possible reason why this happened? I don't think the problem is a badly downloaded dataset, because the model works fine under the train/valid split file restrictions.