
Error in loading state_dict of the pre-trained model #3

Open
ramy-bat opened this issue Aug 12, 2021 · 5 comments

Comments

@ramy-bat

Dear all,
When loading the state_dict, I get the attached error:
[screenshot: error while loading the state_dict]

It seems that the pre-trained model uses a different normalization than the original one.

Thanks in advance for your quick help

@LavenderLA
Collaborator

Thanks for your interest in our work. We've updated the pre-trained model link in README.md.
BTW, if you'd like to reproduce the results in our paper, please add --refine to use the refine blocks. Otherwise you'll encounter an error like "Unexpected key(s) in state_dict: 'refine_block...'".
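As a minimal sketch of that mismatch (a toy nn.Linear stands in for the repo's actual network, and the extra key name is illustrative):

```python
import torch
import torch.nn as nn
from collections import OrderedDict

# Hypothetical sketch: a checkpoint carrying extra 'refine_block' weights
# fails to load into a model built without refine blocks, reproducing
# the "Unexpected key(s)" error.
model = nn.Linear(4, 2)
ckpt = OrderedDict(model.state_dict())
ckpt['refine_block.weight'] = torch.zeros(2, 4)  # key only present with refine blocks

try:
    model.load_state_dict(ckpt)  # strict=True (the default) raises RuntimeError
except RuntimeError as err:
    print(err)  # Unexpected key(s) in state_dict: "refine_block.weight"

# strict=False skips unmatched keys and reports them instead of raising
result = model.load_state_dict(ckpt, strict=False)
print(result.unexpected_keys)  # ['refine_block.weight']
```

Note that strict=False only silences the key mismatch; to get the paper's numbers the refine blocks actually have to be built, hence --refine.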

@ramy-bat
Author

Thanks for your quick response.
I have added --refine, but the problem seems to be that the model's keys are named differently, as follows:
[screenshot: mismatched state_dict key names]

It works with --refine after replacing the names self.gn1, self.gn2, and self.gn3 with self.bn1, self.bn2, and self.bn3 in model\flot\gconv.py, respectively.
I am not quite sure which update you mean, but I have downloaded your pre-trained model again and it shows a new error that seems to come from a different torch version:
[screenshot: new error when loading the updated model]
However, the old model works correctly now after the aforementioned modifications.
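An alternative to editing model\flot\gconv.py is to remap the checkpoint keys instead; a minimal sketch, assuming (as the renaming above suggests) that the checkpoint names the layers bn1/bn2/bn3 while the model modules are named gn1/gn2/gn3:

```python
import re
from collections import OrderedDict

# Hypothetical helper: rename the checkpoint's bn1/bn2/bn3 keys to
# gn1/gn2/gn3 so they match the model's attribute names, leaving the
# model source untouched.
def remap_keys(state_dict):
    remapped = OrderedDict()
    for key, value in state_dict.items():
        remapped[re.sub(r'\bbn([123])\b', r'gn\1', key)] = value
    return remapped

# e.g. 'refine_block.bn1.weight' -> 'refine_block.gn1.weight'
```

You would then call model.load_state_dict(remap_keys(ckpt['state_dict'])) instead of modifying gconv.py.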

@weiyithu
Owner

Thanks for your interest. We have tested the new model and it works fine. Which torch version do you use? I suspect the error may be caused by a different torch version. Also, you should check whether you can reproduce our results with your aforementioned modifications under this torch version.

@gengshi

gengshi commented Jan 24, 2022

@ramy-bat the OrderedDict typing problem is caused by the Python version. Changing the Python version to 3.8 works for me; hope it works for you too.

@gengshi

gengshi commented Jan 24, 2022

@ramy-bat if you don't want to run on Python 3.8, try installing Python 3.8 on your local PC (a Mac in my case) along with pytorch-cpu, then convert the state_dict to a collections.OrderedDict (this API is available on both Python <=3.6 and >=3.7), and finally save and load the checkpoint on Python 3.6 or lower. An example:

Run in Python 3.8:

  import torch
  from collections import OrderedDict

  # load the checkpoint on CPU and cast its state_dict to OrderedDict
  ckpt = torch.load('best_checkpoint.params', 'cpu')
  ckpt['state_dict'] = OrderedDict(ckpt['state_dict'])
  # save a copy that older Python versions can unpickle
  torch.save(ckpt, 'best_checkpoint.params.py36')

Then, on your Python 3.6 (or lower) setup, just load the converted best_checkpoint.params.py36 for the PV-RAFT refine model.

The above solution works for me.
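A self-contained sketch of the full roundtrip described above (a toy nn.Linear model and a temp-file path stand in for the real checkpoint and file names):

```python
import os
import tempfile
import torch
import torch.nn as nn
from collections import OrderedDict

# Hypothetical end-to-end example: cast the state_dict to OrderedDict,
# save the checkpoint, then reload and apply it on the target setup.
model = nn.Linear(3, 3)
ckpt = {'state_dict': OrderedDict(model.state_dict())}

path = os.path.join(tempfile.gettempdir(), 'best_checkpoint.params.py36')
torch.save(ckpt, path)

loaded = torch.load(path, map_location='cpu')
model.load_state_dict(loaded['state_dict'])  # loads cleanly
```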
