Hello. Thank you for releasing this amazing work. I am attempting to perform MultiModal Instruction Tuning on the pretrained-8b SEED-LLaMA model. However, I have not found any detailed hyperparameters in your paper or the GitHub README. Could you share a summary of the instruction tuning hyperparameters for SEED-LLaMA similar to those provided for the pretraining step?