loss does not converge #125
Comments
Hi, can I see the TensorBoard graph?
Have you solved this problem? I have the same issue and am looking forward to your reply; if it isn't solved yet, maybe we can discuss it.
I'm very sorry, I haven't solved it yet. I asked one of my classmates to try this project and they ran into the same problem, which is still unsolved.
Very sorry, I didn't have TensorBoard installed. The loss fluctuates between 1.5 and 3.5 from the first epoch to the last, so there is nothing obviously wrong with the loss curve itself. I asked one of my classmates to try this project and they encountered the same problem, and the person below also ran into it. I'm a little troubled.
Well, if you used DexiNed without changing the TensorBoard part, you probably already have the data; you just need to view the graph. We need to see whether there is improvement across epochs. The labels in edge detection are very sensitive: in some training samples more edges may be detected than appear in the GT, so you'll see different loss values, but that does not mean DexiNed is not training. You should check the average loss of each epoch, as sketched below. It happens in DL-based edge detectors.
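A minimal sketch of what "check the average loss of each epoch" could look like, assuming a standard PyTorch training loop. This is not the actual DexiNed training script; the `model`, `dataloader`, `criterion`, and log directory names are placeholders for illustration:

```python
# Sketch only: average the per-batch loss over each epoch and log it to
# TensorBoard, so epoch-to-epoch improvement is visible even though
# individual sample losses are noisy (more/fewer edges than the GT).
import torch
from torch.utils.tensorboard import SummaryWriter

def train(model, dataloader, criterion, optimizer, device, num_epochs=100):
    writer = SummaryWriter(log_dir="runs/dexined_avg_loss")  # hypothetical log dir
    for epoch in range(num_epochs):
        model.train()
        running_loss, num_batches = 0.0, 0
        for images, labels in dataloader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            preds = model(images)            # multi-scale edge predictions
            loss = criterion(preds, labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            num_batches += 1
        avg_loss = running_loss / max(num_batches, 1)
        writer.add_scalar("loss/epoch_avg", avg_loss, epoch)  # compare this across epochs
        print(f"epoch {epoch}: average loss = {avg_loss:.4f}")
    writer.close()
```

The point is to compare the per-epoch average (the logged `loss/epoch_avg` scalar), not the raw per-batch values, when judging whether training is improving.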
How many training images do you have?
I used the BIPED dataset provided with the project for training; the training set contains 200 images. My loss dropped to 0.9 in the 34th epoch, but it rose back to 2.1 in the 35th epoch, and overall the loss stays around 2.
I read that the experiment in your paper was trained for 150K iterations. I don't know whether the problem is that I trained for too few iterations; I trained for 100 epochs and found no improvement.
Thank you for your reply. I am using the BIPEDv2 data, 200 training images. The training loss curve is similar to the one in the reply above, and it shows no tendency to converge.
Did you change any hyperparameters? Could you try with my lightweight model?
Yes, I changed some hyperparameters. On the first run I didn't change anything, but the loss did not converge. I thought the problem was that the learning rate dropped too quickly, so I then modified the hyperparameters: first, is_testing=False; then I changed the learning rate schedule to decay by 10x every 20 epochs, roughly as sketched below.
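For reference, a hedged sketch of what "decay the learning rate by 10x every 20 epochs" could look like in PyTorch. The Adam optimizer, the initial lr of 1e-4, and the `train_one_epoch` helper are assumptions for illustration, not the project's actual defaults:

```python
import torch

# Sketch only: StepLR multiplies the learning rate by gamma=0.1 every
# step_size=20 epochs, i.e. a 10x drop every 20 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # assumed optimizer/lr
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)

for epoch in range(num_epochs):
    train_one_epoch(model, dataloader, optimizer)  # hypothetical helper
    scheduler.step()  # lr becomes 1e-5 after epoch 20, 1e-6 after epoch 40, ...
```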
Yes, it is useless.
Here is the Lenna result from the fused module, then the averaged one. Results from LDC.
The results look very good. I would like to ask: what was the final converged loss on the BIPED dataset when you trained it with the LDC model? I am using the catloss provided in the code.
Sorry, I don't have access to my former lab, so I cannot get that number right now. But I will let you know whenever I have it.
Hi, I have a problem with the training process of the PyTorch version. I made no changes to the project and used the original BIPEDv2 dataset for training, with the project's default parameters. After training for 17 epochs, the loss barely changes. In the end it can predict edge maps, but the results are not very good. What could be the reason? Looking forward to your reply.