I noticed that when I weight my training examples non-uniformly while training a regressor-type MLP, my validation error comes out an order of magnitude better than my training error. Digging into the code, it looks like the loss function deliberately ignores the weights when calculating the validation error. Since my stopping condition for training is that the validation error holds steady for some number of iterations, I would have expected the validation error to be calculated with the weights as well. Is there a specific reason it isn't?
```python
if mode == 'train':
    loss += processor(Xb, yb, wb if wb is not None else 1.0)
else:
    loss += processor(Xb, yb)
count += 1
```
See lines 322-325 of backend/lasagne/mlp.py
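For reference, here is a minimal sketch of the change I had in mind, assuming `processor` accepts the same weight argument in validation mode as it does in the training branch above:

```python
# Hypothetical change: pass the sample weights in validation mode too,
# so the early-stopping criterion sees a loss on the same scale as training.
if mode == 'train':
    loss += processor(Xb, yb, wb if wb is not None else 1.0)
else:
    loss += processor(Xb, yb, wb if wb is not None else 1.0)
count += 1
```

With that change, the training and validation losses would at least be comparable, rather than differing by roughly the scale of the weights.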