Expose ability to set learning rate for models #157
Turns out that …

Moving context from Slack into the issue: there was a bug where … A couple of helpful links: …
Running with an updated version of PTL (1.7.5) yields the following error: …

This looks like it has to do with https://github.com/Lightning-AI/lightning/pull/8501/files. It's possible that the BackboneFinetuning fix for resuming training from a checkpoint broke auto_lr_find? It may be incorrectly treating the temporary checkpoint saved by auto_lr_find as the model checkpoint, since it calls …
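For reference, a minimal sketch of the kind of setup being described, assuming PTL 1.7.x. `DummyBackboneModel` and its shapes are hypothetical stand-ins, not zamba's actual module; the point is that `tune()` runs lr_find, which saves and then restores a temporary checkpoint, and that restore also round-trips the BackboneFinetuning callback state added in the PR linked above:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from pytorch_lightning.callbacks import BackboneFinetuning


class DummyBackboneModel(pl.LightningModule):
    """Hypothetical minimal module; BackboneFinetuning requires a
    `backbone` attribute on the LightningModule."""

    def __init__(self, lr: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()
        self.backbone = nn.Linear(8, 8)
        self.head = nn.Linear(8, 2)

    def forward(self, x):
        return self.head(torch.relu(self.backbone(x)))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        # auto_lr_find reads and overwrites `self.hparams.lr` in place.
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)


if __name__ == "__main__":
    data = DataLoader(
        TensorDataset(torch.randn(64, 8), torch.randint(0, 2, (64,))),
        batch_size=16,
    )
    trainer = pl.Trainer(
        max_epochs=1,
        auto_lr_find=True,
        callbacks=[BackboneFinetuning(unfreeze_backbone_at_epoch=1)],
    )
    # tune() runs lr_find, which writes a temporary checkpoint and then
    # restores from it, including the finetuning callback's saved state.
    trainer.tune(DummyBackboneModel(), train_dataloaders=data)
```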
Confirmed that if we override that in …, the error goes away. However, this is not the best fix, as it sacrifices being able to resume training from a model checkpoint and have the optimizers load correctly. Filed Lightning-AI/pytorch-lightning#14674.
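A sketch of the kind of override described, assuming the callback's internal state is restored in its `load_state_dict` hook (the mechanism reworked around the PR linked above); the subclass name is hypothetical:

```python
from typing import Any, Dict

from pytorch_lightning.callbacks import BackboneFinetuning


class LRFindSafeBackboneFinetuning(BackboneFinetuning):
    """Hypothetical workaround: skip restoring callback state so that
    lr_find's temporary-checkpoint round trip does not clobber it."""

    def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
        # Deliberately a no-op. NOTE: this also skips the restore on a
        # genuine resume-from-checkpoint, which is exactly the tradeoff
        # described above (see Lightning-AI/pytorch-lightning#14674).
        pass
```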
Right now, the only option is to use auto_lr_find or not. If that is False, the default learning rate is 0.001. We should let users specify this learning rate. ZambaVideoClassificationLightningModule has an lr param (https://github.com/drivendataorg/zamba/blob/master/zamba/pytorch_lightning/utils.py#L143), but this can't be set via configs.
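A sketch of the shape this could take, with hypothetical config and helper names (zamba's actual config classes may differ): add an explicit learning-rate field alongside auto_lr_find and forward it to the module's existing lr param.

```python
from typing import Optional

from pydantic import BaseModel


class TrainConfig(BaseModel):
    """Illustrative config, not zamba's actual schema."""

    auto_lr_find: bool = False
    # New field: explicit learning rate used when auto_lr_find is False.
    # None falls through to the module's current default of 0.001.
    learning_rate: Optional[float] = None


def instantiate_module(config: TrainConfig, module_cls, **hparams):
    """Forward the configured learning rate to the LightningModule's
    existing `lr` param when the user sets one."""
    if config.learning_rate is not None:
        hparams["lr"] = config.learning_rate
    return module_cls(**hparams)
```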