How do I continue training SiamRPN after epoch 10? #428
-
What do I need to change in the config? With my current config I get "ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group". I've checked other threads with this issue, but nothing works. Did I set up the config correctly? Do I need to resume from checkpoint 10, or should that be the PRETRAINED setting?
-
Also, I have TRAIN.START_EPOCH: 10 set.
-
Also, is it normal for the model to stop training once it finishes epoch 10?
-
Actually, the two stages do not share an optimizer (stage 2 adds more trainable parameters), so the optimizer's parameter groups don't match. You can set TRAIN.START_EPOCH: 10 to resume training.
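To make the mismatch concrete, here is a minimal, self-contained PyTorch sketch (not the repo's actual code) that reproduces the same ValueError: in stage 1 the backbone is frozen, so its parameter group is empty, while in stage 2 the rebuilt optimizer's backbone group contains parameters, and the stage-1 optimizer state no longer loads:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two training stages.
backbone = nn.Linear(4, 4)
head = nn.Linear(4, 2)

# Stage 1: backbone frozen -> its parameter group is empty.
opt_stage1 = torch.optim.SGD(
    [{"params": []}, {"params": head.parameters()}], lr=0.01
)
saved_state = opt_stage1.state_dict()  # what an epoch-10 snapshot would store

# Stage 2: backbone layers unfrozen -> the first group now holds parameters.
opt_stage2 = torch.optim.SGD(
    [{"params": backbone.parameters()}, {"params": head.parameters()}], lr=0.01
)

# Restoring the stage-1 optimizer state into the stage-2 optimizer fails
# because the groups no longer have matching sizes.
try:
    opt_stage2.load_state_dict(saved_state)
except ValueError as err:
    print(err)  # loaded state dict contains a parameter group that doesn't
                # match the size of optimizer's group
```

And roughly what a resume setup might look like in a PySOT-style experiment config. TRAIN.START_EPOCH: 10 comes from this thread; the other keys, values, and the snapshot path are assumptions, so check them against your own config definition:

```yaml
TRAIN:
  START_EPOCH: 10                         # continue counting from the end of stage 1
  EPOCH: 20                               # total epochs, so 10 more will run (assumed value)
  RESUME: 'snapshot/checkpoint_e10.pth'   # epoch-10 snapshot with model + optimizer state (assumed key/path)
  # PRETRAINED-style options normally load model weights only (e.g. an ImageNet
  # backbone), so they are not the place to point at a training snapshot.
```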
-
duplicate of #80
-
I did see #80, but I'm wondering if it's normal for the script to stop after the 10th epoch?