Feature request
I have two PCs, each with a single RTX 4090 GPU. I want to fine-tune Google's PaliGemma across these two machines, i.e. multi-node distributed training. Can you guide me on how to do this? Which approach is supported, model parallelism or data parallelism, and how does it work?
Motivation
Enabling distributed training across multiple nodes.
Your contribution
I wrote some code using JAX, but it doesn't work: it stops without raising an error. I can share the code.
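For context on what such a setup looks like: JAX supports multi-node ("multi-process") training, where each machine runs the same script, joins the cluster via `jax.distributed.initialize`, and then data parallelism replicates the parameters on every device while sharding the batch across them. Below is a minimal, hedged sketch; the coordinator IP, port, and toy linear model are placeholders I made up for illustration, not values from this issue. On a real two-node run, the `initialize` call would be uncommented and `process_id` set per machine.

```python
import functools

import jax
import jax.numpy as jnp

# Multi-node setup (commented out so the sketch also runs on one host).
# Each PC runs this same script; the addresses below are hypothetical:
# jax.distributed.initialize(
#     coordinator_address="192.168.1.10:1234",  # first PC's IP:port (assumed)
#     num_processes=2,
#     process_id=0,  # 0 on the first PC, 1 on the second
# )

@functools.partial(jax.pmap, axis_name="batch")
def train_step(params, x, y):
    """One SGD step of data-parallel training on a toy linear model."""
    def loss_fn(p):
        pred = x @ p
        return jnp.mean((pred - y) ** 2)

    grads = jax.grad(loss_fn)(params)
    # Data parallelism: each device computes gradients on its own batch
    # shard, then a collective mean synchronizes them across all devices
    # (and, once initialized, across both nodes).
    grads = jax.lax.pmean(grads, axis_name="batch")
    return params - 0.1 * grads

n = jax.local_device_count()

# Replicate the parameters onto every local device; pmap expects a
# leading device axis on all inputs.
params = jax.device_put_replicated(jnp.zeros((3, 1)), jax.local_devices())

# Shard the batch: each device gets its own slice of 8 examples.
x = jnp.ones((n, 8, 3))
y = jnp.ones((n, 8, 1))

params = train_step(params, x, y)
print(params.shape)  # one parameter replica per device: (n, 3, 1)
```

The key point for the question above is that this is *data* parallelism: the full model fits on each 4090, and only gradients cross the network, which is usually the practical choice for two single-GPU machines.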