I'm currently running HPO on my binary classification model using RandomizedSearchCV. On the first split, training takes 38.1 GB of the 40.1 GB available on my A100 GPU (provided by Google Colab). However, when the code moves on to train the model on the second split, the GPU memory consumption never decreases, so I get hit with a CUDA out-of-memory error.
I also notice that there are no parameters to choose whether the model is trained/loaded on CPU, GPU, or CUDA, like in XGBoost's library. Is CUDA the default? Is there any way to offload the model from CUDA/GPU to CPU?
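For reference, in a plain PyTorch loop I would normally release memory between folds roughly like the sketch below (`build_model`, `X`, and `y` are placeholders here, not this library's API), but RandomizedSearchCV drives the cross-validation internally, so I don't see an easy place to hook this in:

```python
import gc
import torch
from sklearn.model_selection import StratifiedKFold

# X, y: your training data; build_model(): any factory returning a fresh,
# untrained model for the current hyper-parameter candidate (both hypothetical).
scores = []
for train_idx, val_idx in StratifiedKFold(n_splits=5).split(X, y):
    model = build_model()
    model.fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[val_idx], y[val_idx]))

    # Explicitly drop the fold's model so its GPU tensors can be garbage
    # collected, then ask PyTorch's caching allocator to return the memory.
    del model
    gc.collect()
    torch.cuda.empty_cache()
```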
Hi,
There is a `trainer_kwargs` argument in the `fit` method where you can specify the device the model is trained on.
We will also release a new version in the coming days with built-in HPO support and a more efficient Mamba version than the current PyTorch one.
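A rough sketch of what that could look like; the `mambular.models.MambularClassifier` import and the `accelerator`/`devices` keys are assumptions for illustration and only apply if `trainer_kwargs` is forwarded to a PyTorch Lightning `Trainer`:

```python
import pandas as pd
from sklearn.datasets import make_classification
# Class/module names below are assumptions for illustration; adjust to your setup.
from mambular.models import MambularClassifier

# Small synthetic binary-classification dataset just to make the sketch runnable.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(X.shape[1])])

model = MambularClassifier()
model.fit(
    X,
    y,
    # trainer_kwargs is the argument mentioned above; the keys assume it is
    # forwarded to a PyTorch Lightning Trainer, where "accelerator" selects the device.
    trainer_kwargs={"accelerator": "cpu"},  # or {"accelerator": "gpu", "devices": 1}
)
```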