How do I train DualStyleGAN to carry out inference on the entire body? #55
Comments
You need to train a StyleGAN on full-body images or use a pre-trained one.
For full-body training, I have two questions:
I found this issue, so I think it is a model error: when I run pip install torch-utils and then load the model, I get an error at magic_number = pickle_module.load(f, **pickle_load_args).
I see...
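That magic_number line usually means torch.load was handed a file that is not a valid PyTorch checkpoint (for example an incomplete download or a Git LFS pointer), rather than something that installing torch-utils can fix. A minimal sketch for checking the file, with a hypothetical path:

```python
import torch

# Hypothetical path to the downloaded checkpoint.
path = "generator.pt"

# A real checkpoint starts with a zip/pickle header, not readable text;
# a Git LFS pointer or an HTML error page will show up as plain ASCII here.
with open(path, "rb") as f:
    print(f.read(64))

# If the file is valid, this succeeds and the keys can be inspected.
ckpt = torch.load(path, map_location="cpu")
print(list(ckpt.keys()))
```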
"You can retrain your own rosinality's stylegan2-pytorch on your target dataset." You mean that I need to train a StyleGAN model on my own full-body dataset (anime full-body pictures) with the stylegan2-pytorch project, without using the existing pretrained model? In other words, "you need to train a StyleGAN on full body images or use a pre-trained one." If I train a body StyleGAN with rosinality's stylegan2-pytorch, at least how many images (full-body cartoon images) do I need, how long does training take, and how many GPUs are required?
Please refer to StyleGAN for the dataset size, training time, and GPU requirements.
Can I directly train an anime full-body StyleGAN and then use it for DualStyleGAN, so that I don't have to run Step 2 (Fine-tune StyleGAN, i.e., fine-tuning StyleGAN in distributed settings)?
I don't know which one is better.
I tested on a single 3090: 1024x1024 cannot run, and after changing the resolution to 256x256 it does. I feel that training from scratch is too demanding, and I am not sure whether converting the StyleGAN-Human pretrained model is easy to implement.
I found a conversion script (https://github.com/dvschultz/stylegan2-ada-pytorch/blob/main/export_weights.py), but some parameters are missing after conversion. Is that because only the EMA generator parameters (G_ema) are converted, while the other parameters (G and D) are not?
You can simply load G with G_ema's parameters.
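For reference, a minimal sketch of that suggestion, assuming a rosinality-style checkpoint that stores its weights under the keys "g", "d", and "g_ema" (the file names are hypothetical):

```python
import torch

# Load the converted checkpoint (hypothetical file name).
ckpt = torch.load("stylegan2-anime-body.pt", map_location="cpu")

# rosinality-style checkpoints keep the running generator under "g" and the
# exponential-moving-average copy under "g_ema"; copying one into the other
# gives downstream code a complete G even if only G_ema was converted.
ckpt["g"] = ckpt["g_ema"]

torch.save(ckpt, "stylegan2-anime-body-filled.pt")
```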
Thank you very much for your help. I have used the G_ema conversion code to convert G, so now only the conversion of the D parameters is left. I have asked that question as well and am waiting for a reply. Can I do the conversion myself (by referring to the G_ema code)? I am not sure how difficult it would be for me. On another topic, there are two things I don't quite understand; could you please give me some guidance? How do I use pix2pixHD to stylize the whole image? I found that it generates the image based on a mask.
Happy New Year! I would like to know: if I have trained a StyleGAN model that generates full-body figures, how should I continue training after that? I noticed that many of the later steps involve face-specific models (the pSp encoder, the face-related model at https://github.com/TreB1eN/InsightFace_Pytorch, and the subsequent face detection and alignment operations). How should these be handled? Looking forward to your guidance.
Any clues on how to train or modify the input so that I can carry out inference on the entire body? I'm planning to use this on something like DCT-Net.
I have tried resizing the image to 1024x1024, but the resized image gives different results. Do I need to train on different images, or do I need to make changes to the code itself?
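For context, a minimal sketch of the resize that was tried (paths are hypothetical). The released DualStyleGAN checkpoints are trained on FFHQ-aligned face crops, so resizing a full-body photo to 1024x1024 does not match that distribution, which is consistent with the odd results described above:

```python
from PIL import Image

# Resize an arbitrary full-body photo to 1024x1024 (hypothetical paths).
img = Image.open("full_body.png").convert("RGB")
img = img.resize((1024, 1024), Image.LANCZOS)
img.save("full_body_1024.png")

# Even at the right resolution, the pretrained face models expect
# FFHQ-style aligned face crops, so a full-body input will still look off.
```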