The model is unstable #22

Open
mlgbmhl opened this issue Nov 2, 2021 · 1 comment

mlgbmhl commented Nov 2, 2021

Hello! Thanks for sharing the source code; it has helped me a lot.
I have been trying to reproduce the results with your code, but I ran into some problems.

Within each fold's train/test cycle, the model can reach the expected test accuracy of around 0.83.
However, I find the model is unstable: the train/test loss still changes sharply from epoch to epoch and even tends to rise, and the same problem shows up in the train/test accuracy. For example (a stabilization sketch follows the log):

      [141001] training loss: 0.310 training batch acc 0.875000
      0.7935779816513762
      test loss = 251.06089854240417
      [142001] training loss: 0.313 training batch acc 0.843750
      0.8119266055045872
      test loss = 251.3142318725586
      [143001] training loss: 0.310 training batch acc 0.875000
      0.7798165137614679
      test loss = 243.6745239496231
      [144001] training loss: 0.313 training batch acc 0.906250
      0.8027522935779816
      test loss = 255.16645431518555
      [145001] training loss: 0.313 training batch acc 0.953125
      0.8027522935779816
      test loss = 243.88370209932327
      [146001] training loss: 0.315 training batch acc 0.875000
      0.8027522935779816
      test loss = 257.25346076488495
      [147001] training loss: 0.314 training batch acc 0.875000
      0.8027522935779816
      test loss = 258.0545355081558
      [148001] training loss: 0.310 training batch acc 0.843750
      0.7889908256880734
      test loss = 253.85232359170914
      [149001] training loss: 0.312 training batch acc 0.796875
      0.7981651376146789
      test loss = 250.11881053447723
      [150001] training loss: 0.310 training batch acc 0.890625
      0.7935779816513762
      test loss = 259.8563167452812
      [151001] training loss: 0.309 training batch acc 0.843750
      0.7935779816513762
      test loss = 257.9779593348503
      [152001] training loss: 0.311 training batch acc 0.890625
      0.8073394495412844
      Best accuracy for window 128 and fold 1 = 0.8394495412844036 at epoch = 27000
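
(For reference: this kind of oscillation often traces back to a learning rate that stays large for the whole run. Below is a minimal sketch of two common stabilizers, fixing the random seed and decaying the learning rate on a schedule; the stand-in model, data, and scheduler settings are assumptions for illustration, not this repo's actual configuration.)

      import torch
      import torch.nn as nn

      torch.manual_seed(0)  # fix the seed so successive runs are comparable

      # Stand-in model and data; in practice these are the repo's network and loaders.
      model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
      inputs = torch.randn(512, 16)
      labels = torch.randint(0, 2, (512,))

      criterion = nn.CrossEntropyLoss()
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      # Step the learning rate down partway through training; a constant rate that
      # stays large is a common cause of loss that oscillates instead of settling.
      scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.1)

      for epoch in range(300):
          optimizer.zero_grad()
          loss = criterion(model(inputs), labels)
          loss.backward()
          optimizer.step()
          scheduler.step()  # one scheduler step per epoch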

Is it appropriate to report the best test result from some epoch, or should the final test result after all epochs be used as the model's test accuracy, given that it is still changing sharply?
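
(For what it's worth, picking the best epoch directly on the test set leaks test information into model selection. The usual compromise, sketched below, is to select the checkpoint on a held-out validation split and report that checkpoint's test accuracy once at the end; the stand-in model, random splits, and accuracy helper here are hypothetical, not this repo's code.)

      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      # Stand-in model and random splits; the real network and data come from the repo.
      model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
      x, y = torch.randn(600, 16), torch.randint(0, 2, (600,))
      x_tr, y_tr = x[:400], y[:400]        # training split
      x_va, y_va = x[400:500], y[400:500]  # validation split, used for selection
      x_te, y_te = x[500:], y[500:]        # test split, touched only once

      criterion = nn.CrossEntropyLoss()
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

      def accuracy(xs, ys):
          with torch.no_grad():
              return (model(xs).argmax(dim=1) == ys).float().mean().item()

      best_val_acc, best_state = 0.0, None
      for epoch in range(200):
          optimizer.zero_grad()
          loss = criterion(model(x_tr), y_tr)
          loss.backward()
          optimizer.step()

          val_acc = accuracy(x_va, y_va)
          if val_acc > best_val_acc:  # checkpoint chosen on validation data only
              best_val_acc = val_acc
              best_state = {k: v.clone() for k, v in model.state_dict().items()}

      model.load_state_dict(best_state)  # restore the selected checkpoint
      print("test acc of selected model:", accuracy(x_te, y_te))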

@GMLB1997

In my implementation, even the highest accuracy does not reach 0.83. What do the other folds look like in your implementation? Do you always get a highest accuracy of 0.83?
