Hello! Thanks for your great work.
I came across your project and am very interested in trying it out. However, I noticed in the documentation that it recommends having a 24GB GPU. Currently, I only have access to a 16GB GPU.
I was wondering if there are any adjustments or settings that could be modified to make the code run effectively on a 16GB GPU without significantly compromising performance or output quality.
Could you provide some guidance or tips on how to configure the project for a 16GB environment?
Thank you!
Well, I'm not sure the memory footprint can be reduced to 16GB, because I just leveraged some off-the-shelf models. The only advice I can give is to move each model to the CUDA device right before you call it and free it again after use.
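A minimal sketch of that pattern with PyTorch, assuming the models are standard `nn.Module` instances; the `run_on_gpu` helper and its arguments are illustrative, not part of this repo:

```python
import torch

def run_on_gpu(model, inputs):
    # Move the model to the GPU only for the duration of this call.
    model.to("cuda")
    with torch.no_grad():
        outputs = model(inputs.to("cuda"))
    # Move the model back to CPU and release cached GPU memory
    # so the next model has room to load.
    model.to("cpu")
    torch.cuda.empty_cache()
    return outputs.cpu()
```

Keeping only one model resident on the GPU at a time trades some extra host-device transfer time for a lower peak memory footprint, which may be enough to fit in 16GB depending on the models involved.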