
When I set the batch size to 2, an error message is printed: Error Occurred!: (512, 512, x, x) #98

Open
benzhangdragonplus opened this issue Jul 2, 2024 · 1 comment

benzhangdragonplus commented Jul 2, 2024

I am training with the imagesliders script, and when I set batch_size: 2 in prompts.yaml,
it prints the error message Error Occurred!: (512, 512, x, x), but training does not stop.
When I set the batch size back to 1, everything works normally again. Why is that?

Here is my configuration file:

```yaml
prompts_file: "trainscripts/imagesliders/data/light/prompts.yaml"
pretrained_model:
  name_or_path: "D:\zealot\sdxl_v10VAEFix.safetensors" # you can also use .ckpt or .safetensors models
  v2: false # true if model is v2.x
  v_pred: false # true if model uses v-prediction
network:
  type: "c3lier" # or "c3lier" or "lierla"
  rank: 4 # ####################
  alpha: 1.0
  training_method: "full" # full, selfattn, xattn, noxattn, or innoxattn
train:
  precision: "bfloat16"
  noise_scheduler: "ddim" # or "ddpm", "lms", "euler_a" # ####################
  iterations: 1600
  lr: 0.0002 # ####################
  optimizer: "AdamW" # ####################
  lr_scheduler: "constant"
  max_denoising_steps: 50
save:
  name: "light_temp"
  path: "models"
  per_steps: 200
  precision: "bfloat16"
logging:
  use_wandb: false
  verbose: false
other:
  use_xformers: true
```

And here is my prompts.yaml:

```yaml
- target: "" # what word for erasing the positive concept from
  positive: "" # concept to erase
  unconditional: "" # word to take the difference from the positive concept
  neutral: "" # starting point for conditioning the target
  action: "erase" # erase or enhance
  guidance_scale: -1 # ######################
  resolution: 1024
  dynamic_resolution: false
  batch_size: 2
```
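For anyone hitting the same message, here is a minimal pre-flight sketch (not part of the repo; it assumes PyYAML is installed and that prompts.yaml is a list of prompt dicts as shown above) that flags any prompt whose batch_size is not 1 before training starts:

```python
import yaml

# Hypothetical sanity check; the path is the one from the config above
# and would need to match your local setup.
PROMPTS_PATH = "trainscripts/imagesliders/data/light/prompts.yaml"

with open(PROMPTS_PATH, "r", encoding="utf-8") as f:
    prompts = yaml.safe_load(f)  # a list of prompt dicts, as shown above

for i, p in enumerate(prompts):
    bs = p.get("batch_size", 1)
    if bs != 1:
        # Per the maintainer's reply below, batch size is not wired up for
        # image training, so values > 1 may trigger shape errors.
        print(f"prompt {i}: batch_size={bs}; image-slider training expects 1")
```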
rohitgandikota (Owner) commented

Hey, thanks for the question! We did not configure batch size for image training; in our earlier experiments, higher batch sizes were causing memory issues.
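If the goal of batch_size: 2 is a larger effective batch, one alternative is gradient accumulation while keeping batch_size: 1. Below is a minimal, self-contained sketch of the idea in a generic PyTorch loop; the model, optimizer, and loss are stand-ins, not the repo's actual training code:

```python
import torch
from torch import nn

# Illustrative stand-ins for the real model, optimizer, and data.
model = nn.Linear(4, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
data = [torch.randn(1, 4) for _ in range(8)]  # batch_size=1 samples

ACCUM_STEPS = 2  # emulate an effective batch size of 2

optimizer.zero_grad()
for step, x in enumerate(data):
    loss = model(x).pow(2).mean()       # placeholder loss
    (loss / ACCUM_STEPS).backward()     # scale so grads average over steps
    if (step + 1) % ACCUM_STEPS == 0:
        optimizer.step()                # update once per ACCUM_STEPS samples
        optimizer.zero_grad()
```

This keeps memory usage at the batch-size-1 level while averaging gradients over two samples per optimizer step.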
