Some questions about inference #16
Hi, you may change the NMS setting in the test config (e.g. soft-NMS vs. standard NMS).
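If that suggestion is about the NMS used at test time, the relevant knob in an MMDetection-style VFNet config is the `test_cfg` block. A minimal sketch, assuming the repo's config conventions (the argument is named `iou_thr` in older MMDetection releases and `iou_threshold` in newer ones):

```python
# Test-time settings in an MMDetection-style VFNet config (a sketch,
# not the exact config from this thread).
test_cfg = dict(
    nms_pre=1000,    # keep the top-1000 boxes per FPN level before NMS
    min_bbox_size=0,
    score_thr=0.05,  # drop detections below this score first
    # Standard (hard) NMS:
    nms=dict(type='nms', iou_thr=0.6),
    # Soft-NMS alternative -- decays the scores of overlapping boxes
    # instead of suppressing them outright:
    # nms=dict(type='soft_nms', iou_thr=0.5),
    max_per_img=100)  # final number of detections kept per image
```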
Thanks for your quick reply. I think the choice between soft-NMS and NMS may not cause a big gap, but it is possible. The trained model obtained 58% mAP on the first test dataset (A), but only 52% mAP on the second test dataset (B). And when I used the same model to test dataset (B) again, it got only 41% mAP. I would like to know whether you ever observed such random mAP drops in your experiments. Thanks!
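One quick way to see what actually changed between two runs (an editorial debugging aid, not something from the thread) is to diff the COCO-style result files directly; the file names below are hypothetical:

```python
import json
from collections import Counter

# Hypothetical file names; substitute the result files written by two
# consecutive test runs of the same checkpoint on the same dataset.
with open('results_run1.bbox.json') as f:
    run1 = json.load(f)
with open('results_run2.bbox.json') as f:
    run2 = json.load(f)

print(f'run1: {len(run1)} detections, run2: {len(run2)} detections')

# COCO-style results are a flat list of per-detection dicts, so count
# detections per image to see where the two runs diverge.
per_img1 = Counter(d['image_id'] for d in run1)
per_img2 = Counter(d['image_id'] for d in run2)
changed = [img for img in per_img1.keys() | per_img2.keys()
           if per_img1[img] != per_img2[img]]
print(f'{len(changed)} images differ in detection count')
```

If the two files disagree, the mAP gap is not an evaluation artifact but genuine nondeterminism in the test run itself.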
It's a bit weird to see a random drop in performance; I never experienced this in my experiments. But I did see a performance drop when I used …
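If the drop really is run-to-run nondeterminism, a common first step (my addition, not the maintainer's suggestion) is to pin every RNG and force deterministic cuDNN kernels before running the test script:

```python
import random

import numpy as np
import torch


def set_deterministic(seed: int = 0) -> None:
    """Fix all RNGs and force deterministic cuDNN kernels so that
    repeated test runs of the same checkpoint give identical outputs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

If results still vary with all of this pinned, the remaining variance usually comes from inherently non-deterministic CUDA ops.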
Thanks for your reply, and thanks again for your nice work. I will run some experiments later.
Thanks for your nice work! But I ran into some issues when I used this model on a custom dataset.
I trained a model that achieved a satisfying mAP on the validation set and on the first test dataset (A). But on the second test dataset (B), the trained VFNet performed poorly, and the test results differed across repeated runs: for example, the output JSON file of the first test run was 15 MB, that of the second run was 20 MB, and so on. I still don't know what caused this. Could it be the multi-scale testing? (See the pipeline sketch after this comment.)
That's my config:
Looking forward to your reply, thanks!
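For reference, the multi-scale testing the issue author suspects would, in an MMDetection-style config, live in the test pipeline's `MultiScaleFlipAug` block. A sketch with made-up scale values; collapsing `img_scale` to a single size (and setting `flip=False`) is a quick way to rule multi-scale testing in or out:

```python
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        # Each image is run at every listed scale (and, with flip=True,
        # also mirrored) and the detections are merged afterwards.
        img_scale=[(1333, 480), (1333, 800), (1333, 1120)],  # example scales
        flip=True,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
```

Note that `MultiScaleFlipAug` iterates over a fixed list of scales, so by itself it is deterministic; if single-scale testing still produces different result files across runs, the nondeterminism is probably elsewhere.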