I am training with imagesliders. When I set `batch_size: 2` in prompts.yaml, it prints the error message `Error Occurred!: (512, 512, x, x)`, but training does not stop. When I set the batch size back to 1, everything works normally again. Why?
Here is my configuration file:
```yaml
prompts_file: "trainscripts/imagesliders/data/light/prompts.yaml"
pretrained_model:
  name_or_path: "D:\zealot\sdxl_v10VAEFix.safetensors" # you can also use .ckpt or .safetensors models
  v2: false # true if model is v2.x
  v_pred: false # true if model uses v-prediction
network:
  type: "c3lier" # or "c3lier" or "lierla"
  rank: 4
  alpha: 1.0
  training_method: "full" # full, selfattn, xattn, noxattn, or innoxattn
train:
  precision: "bfloat16"
  noise_scheduler: "ddim" # or "ddpm", "lms", "euler_a"
  iterations: 1600
  lr: 0.0002
  optimizer: "AdamW"
  lr_scheduler: "constant"
  max_denoising_steps: 50
save:
  name: "light_temp"
  path: "models"
  per_steps: 200
  precision: "bfloat16"
logging:
  use_wandb: false
  verbose: false
other:
  use_xformers: true
target: "" # what word for erasing the positive concept from
positive: "" # concept to erase
unconditional: "" # word to take the difference from the positive concept
neutral: "" # starting point for conditioning the target
action: "erase" # erase or enhance
guidance_scale: -1
resolution: 1024
dynamic_resolution: false
batch_size: 2
```
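A side note on the config above, unrelated to the batch-size question: in double-quoted YAML strings, backslashes are escape sequences, so a Windows path like `"D:\zealot\..."` may fail to parse or parse incorrectly depending on the YAML loader. Forward slashes or single quotes avoid the issue:

```yaml
# Either form is safe for a Windows path in YAML:
name_or_path: "D:/zealot/sdxl_v10VAEFix.safetensors"    # forward slashes
# name_or_path: 'D:\zealot\sdxl_v10VAEFix.safetensors'  # single quotes: backslashes are literal
```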
Hey, thanks for the question! We did not configure batch size for image training; in our earlier experiments, higher batch sizes caused memory issues.
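As for why the error prints without halting the run: a plausible explanation (this is a guess at the behavior, not the repo's actual code) is that each training step is wrapped in a `try/except` that logs the failure and moves on, so a per-step shape mismatch with `batch_size: 2` is reported but never aborts the loop. A minimal sketch of that pattern, with a hypothetical `load_pair` standing in for loading one before/after image pair:

```python
import numpy as np

def load_pair(resolution=512):
    # Hypothetical stand-in for loading one before/after image pair.
    img = np.zeros((resolution, resolution, 3), dtype=np.float32)
    return img, img

def train(batch_size, iterations=3):
    """Toy loop illustrating a per-step try/except that swallows errors."""
    completed = 0
    for step in range(iterations):
        try:
            pairs = [load_pair() for _ in range(batch_size)]
            if batch_size != 1:
                # Code written for a single pair receives a batch: a shape-like
                # tuple such as the reported "(512, 512, x, x)" surfaces here.
                raise ValueError((512, 512, batch_size, batch_size))
            completed += 1
        except Exception as e:
            # Logged and skipped -- the loop continues to the next iteration,
            # which is why the run "won't stop" despite the error messages.
            print("Error Occurred!:", e)
    return completed

print(train(batch_size=1))  # all steps succeed
print(train(batch_size=2))  # errors print, yet the loop runs to completion
```

With `batch_size: 1` every step completes; with `batch_size: 2` every step fails, so the run finishes its iterations while producing no useful updates, which matches the symptom described above.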