Error in loading state_dict of the pre-trained the model #3
Thanks for your interest in our work. We've updated the pre-trained model link in README.md.
Thanks for your interest. We have tested the new model and it works fine. Which torch version do you use? I suspect the error may be caused by a different torch version. You should also check whether you can reproduce our result, with your aforementioned modifications, under this torch version.
@ramy-bat the OrderedDict typing problem is caused by the Python version. Switching to Python 3.8 works for me; hope it works for you too.
@ramy-bat if you don't want to run on Python 3.8, try installing Python 3.8 on your local PC (a Mac, in my case) together with pytorch-cpu, then convert the state_dict to a collections.OrderedDict, since that API is available on both Python <=3.6 and >=3.7. Then save the checkpoint and load it on Python 3.6 or lower. For example, run this in Python 3.8:
Then, on your Python 3.6 or lower version, just load the converted checkpoint. The above solution works for me.
Dear all,
When loading the state_dict, the attached error occurs.
It seems that the pre-trained model uses a different normalization than the original one.
Thanks in advance for your quick help.