Number of neurons affects learning #113
For some extra context on this one, in normal nengo, the actual learning rate parameter used in the learning rule is:
The intent is to have a learning system that behaves about the same if you change dt or n_neurons. So if something takes 3 seconds to learn with dt=0.001, it should also take 3 seconds to learn with dt=0.0001. Similarly, as you increase the number of neurons, the absolute magnitude of the decoder values decreases. So, if I double the number of neurons, I should halve the learning rate. So we decided that nengo should take care of those two scaling factors for you, since we seem to want them all the time.
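To make that scaling concrete, here is a small sketch in plain Python (this is an illustration of the behaviour described above, not Nengo's actual implementation; the function names `effective_rate` and `pes_decoder_step` are made up for this example):

```python
def effective_rate(learning_rate, dt, n_neurons):
    # Per-step rate after the scaling described above (a sketch, not
    # Nengo source): folding in dt keeps wall-clock learning time
    # roughly invariant to the timestep, and dividing by n_neurons
    # compensates for decoders shrinking as neurons are added.
    return learning_rate * dt / n_neurons

def pes_decoder_step(decoders, activities, error, learning_rate, dt):
    # One PES-style update of a 1-D decoder vector (illustrative only):
    # each neuron's decoder moves proportionally to its activity and
    # the error, scaled by the effective per-step rate.
    kappa = effective_rate(learning_rate, dt, len(activities))
    return [d + kappa * a * error for d, a in zip(decoders, activities)]
```

With this scaling, doubling the neuron count or halving dt both halve the per-step update, which is exactly the invariance described above.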
It's not inconceivable that this is a fixed-point problem; I'm currently playing around with a different fixed-point format in a different branch. I guess this is more reason to get new_ensemble-new_learning merged ASAP.
Hi Travis, just to be clear, which branch of Nengo SpiNNaker are you on?
New ensemble new learning! :)
Ah, also I just realized that there were some changes in handling the transform in the last week-ish in
Grrr... it would have been nice if we'd made that nengo change in a way that was backwards compatible... At least it's just a small refactoring change. @mundya and @neworderofjamie, do you think we should sort out some sort of process for identifying when changes to core nengo break projects that depend on it? There are starting to be a fair number of these (nengo_gui, nengo_spinnaker, nengo_ocl, nengo_mpi), and mostly we've been handling this by trying to keep such changes to a minimum. This last month or so there's been a sudden increase in breaking changes because we're trying to get v2.1.0 sorted out. That should just be a temporary spike. But I'm not sure how annoying it has been for you to keep things in sync.
I believe @mundya has done some updates to support recent changes, but I have had a quick look at the issue and we are dividing the learning rate
One random idea that might help with this: would it make sense to do stochastic rounding (with an LFSR) here? So that instead of values < 0.00003 being rounded to 0, they sometimes get rounded up to 0.00003? Mind you, the stochastic thing turned out to not work so well for the neuron model.....
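For reference, the idea sketched in Python (a toy model, not actual SpiNNaker C code; the 16-bit Galois LFSR taps and the 2**-15 fixed-point grid are assumptions for illustration):

```python
def lfsr16(state):
    # One step of a 16-bit Galois LFSR (maximal-length taps 0xB400,
    # i.e. x^16 + x^14 + x^13 + x^11 + 1) -- the kind of cheap PRNG
    # that is practical on an embedded core.
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

def stochastic_round(value, state, frac_bits=15):
    # Round `value` onto a fixed-point grid with 2**-frac_bits spacing
    # (~0.00003 here), rounding up with probability equal to the
    # fractional remainder. Tiny updates then survive *on average*
    # instead of always truncating to zero.
    scaled = value * (1 << frac_bits)
    lo = int(scaled // 1)
    frac = scaled - lo
    state = lfsr16(state)
    if state / 65536.0 < frac:
        lo += 1
    return lo / (1 << frac_bits), state
```

Averaged over many steps, an update of, say, 0.00001 contributes roughly 0.00001 per step, even though each individual rounded value is either 0 or 2**-15.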
@studywolf - we've merged new_ensemble-new_learning into master, which should mean the transform issue is fixed. Do you mind looking to see if you still have the same problem? If so, we can think about how to fiddle with the fixed point.
Ah cool! I don't actually have access to a spinnaker board at the moment, @tcstewar could you try out the fix? |
Oh wait, neither does he. Maybe @hunse you could get Brent in on it?
I can check on Friday when I get back.... :) |
I was playing around with some more learning on SpiNNaker, and I'm getting some discrepancies between nengo and nengo_spinnaker. Basically, it looks like the number of neurons might not be getting taken into account in nengo_spinnaker. In nengo there's a part of the learning rule (which I belieeeeve is just 1/n_neurons) that makes sure learning occurs at the same rate no matter how many neurons there are, so maybe that's not implemented?
One place this came up: when I tried to increase the number of neurons to learn a more complex function, I got oscillatory behaviour in the learning, but when I dropped the learning rate the change in weights became too small to get picked up by the system, and so no learning occurred at all!
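That "too small to get picked up" failure mode would be consistent with fixed-point truncation. Assuming an S16.15-style grid (an assumption for illustration; I don't know the exact format nengo_spinnaker uses), anything below one LSB, about 3.05e-5, truncates to zero. A toy illustration:

```python
def to_fixed(x, frac_bits=15):
    # Truncate to a fixed-point grid with 2**-frac_bits resolution
    # (illustrative; not the actual nengo_spinnaker representation).
    return int(x * (1 << frac_bits)) / (1 << frac_bits)

# A per-step weight change below one LSB vanishes entirely,
# while one comfortably above it survives (coarsely quantized).
small = to_fixed(0.00002)   # below 2**-15, truncates to 0.0
large = to_fixed(0.0001)    # above 2**-15, survives quantization
```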
Here are some graphs, from nengo:
and then nengo_spinnaker:
I'm not sure what's going on in the second half there; it's possibly completely unrelated to the number of neurons. And actually, as I'm looking at it, it doesn't really look like the 2000-neuron population is just overshooting after that first bump... maybe things are saturating and then going to hell... I'm not sure. Here's the code I used:
Also, I am on the most recent nengo and the new_ensemble-new-learning branch of nengo_spinnaker.