
Running Code with 16GB GPU Instead of Recommended 24GB #6

Open
shp216 opened this issue Apr 18, 2024 · 2 comments

Comments


shp216 commented Apr 18, 2024

Hello! Thanks for your great work.

I came across your project and am very interested in trying it out. However, I noticed in the documentation that it recommends having a 24GB GPU. Currently, I only have access to a 16GB GPU.

I was wondering if there are any adjustments or settings that could be modified to make the code run effectively on a 16GB GPU without significantly compromising performance or output quality.

Could you provide some guidance or tips on how to configure the project for a 16GB environment?
Thank you!

@neil0306

Same here, if it's possible.

@tsunghan-wu
Owner

Well, I'm not sure I can reduce the memory footprint to 16GB, because the project just leverages some off-the-shelf models. The only advice I can give is to move each model to the CUDA device right before you call it, and free it again after use.
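For anyone trying this, here is a minimal sketch of that load-on-demand pattern, assuming PyTorch-style models that expose a `.to(device)` method. The `DummyModel` class and `on_gpu` helper are hypothetical stand-ins (so the example runs without `torch`); in real code you would pass the actual models and call `torch.cuda.empty_cache()` after moving them off the GPU.

```python
from contextlib import contextmanager

class DummyModel:
    """Hypothetical stand-in for an off-the-shelf model; tracks its device."""
    def __init__(self):
        self.device = "cpu"

    def to(self, device):
        self.device = device
        return self

    def __call__(self, x):
        # Placeholder for real inference; real code would require the GPU here.
        return x * 2

@contextmanager
def on_gpu(model, device="cuda"):
    """Keep a model on the GPU only for the duration of one call."""
    model.to(device)
    try:
        yield model
    finally:
        model.to("cpu")
        # With torch: torch.cuda.empty_cache() to release cached VRAM.

model = DummyModel()
with on_gpu(model) as m:
    result = m(21)
# After the block, the model is back on the CPU and GPU memory can be reused.
```

Doing this for each off-the-shelf model in turn trades some latency (repeated host-to-device transfers) for peak VRAM, which is usually the right trade on a 16GB card.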


3 participants