Number of neurons affects learning #113

Open · studywolf opened this issue Mar 30, 2016 · 13 comments

@studywolf

I was playing around with some more learning on SpiNNaker, and I'm getting some discrepancies between nengo and nengo_spinnaker. Basically it looks like the number of neurons might not be getting taken into account in nengo_spinnaker: in nengo there's a part of the learning rule (which I believe is just a 1/n_neurons scaling) that makes sure learning occurs at the same rate no matter how many neurons there are. Maybe that isn't implemented?

One place this came up: when I increased the number of neurons to learn a more complex function, the learning became oscillatory, but when I dropped the learning rate the weight changes became too small to be picked up by the system, so no learning occurred at all!

Here are some graphs, from nengo:

[figure: nengo_learn]

and then nengo_spinnaker:

[figure: nengo_spinn_learn]

I'm not sure what's going on in the second half there; it's possibly completely unrelated to the number of neurons. Looking at it again, it doesn't really look like the 2000-neuron population is just overshooting after that first bump... maybe things are saturating and then going to hell... I'm not sure. Here's the code I used:

import numpy as np

import nengo
import nengo_spinnaker


model = nengo.Network()
with model:
    node_in = nengo.Node(output=np.sin)
    node_target = nengo.Node(output=lambda t: np.sin(t)**2)

    # Pre populations (note the names don't match the neuron counts):
    # ens100 has 50 neurons, ens1000 has 2000 neurons
    ens100 = nengo.Ensemble(n_neurons=50, dimensions=1)
    ens1000 = nengo.Ensemble(n_neurons=2000, dimensions=1)
    # Post populations, 100 neurons each
    ens200 = nengo.Ensemble(n_neurons=100, dimensions=1)
    ens2000 = nengo.Ensemble(n_neurons=100, dimensions=1)
    # Error populations, 100 neurons each
    ens_target00 = nengo.Ensemble(n_neurons=100, dimensions=1)
    ens_target000 = nengo.Ensemble(n_neurons=100, dimensions=1)

    nengo.Connection(node_in, ens100)
    nengo.Connection(node_in, ens1000)

    # Learned connections start from zero and use the same PES learning rate
    learn_conn00 = nengo.Connection(
        ens100, ens200,
        function=lambda x: 0,
        learning_rule_type=nengo.PES(learning_rate=1e-4))
    learn_conn000 = nengo.Connection(
        ens1000, ens2000,
        function=lambda x: 0,
        learning_rule_type=nengo.PES(learning_rate=1e-4))

    # Error populations represent (target - actual); the -1 transform on
    # the connections into the learning rules flips this to the
    # (actual - target) signal that PES expects
    nengo.Connection(node_target, ens_target00)
    nengo.Connection(node_target, ens_target000)
    nengo.Connection(ens200, ens_target00, transform=-1)
    nengo.Connection(ens2000, ens_target000, transform=-1)

    nengo.Connection(ens_target00, learn_conn00.learning_rule, transform=-1)
    nengo.Connection(ens_target000, learn_conn000.learning_rule, transform=-1)

    probe00 = nengo.Probe(ens200, synapse=0.01)
    probe000 = nengo.Probe(ens2000, synapse=0.01)
    probe_target = nengo.Probe(node_target)

nengo_spinnaker.add_spinnaker_params(model.config)
model.config[node_in].function_of_time = True
model.config[node_target].function_of_time = True

# sim = nengo_spinnaker.Simulator(model)
sim = nengo.Simulator(model)
sim.run(20.0)
sim.close()

import matplotlib.pyplot as plt
plt.plot(sim.trange(), sim.data[probe00])
plt.plot(sim.trange(), sim.data[probe000])
plt.plot(sim.trange(), sim.data[probe_target])
plt.show()

Also I am on the most recent nengo and the new_ensemble-new-learning branch of nengo_spinnaker.

@tcstewar
Contributor

For some extra context on this one, in normal nengo, the actual learning rate parameter used in the learning rule is:

learning_rate * dt / n_neurons

The intent is to have a learning system that behaves about the same if you change dt or n_neurons. So if something takes 3 seconds to learn with dt=0.001, it should also take 3 seconds to learn with dt=0.0001. Similarly, as you increase the number of neurons, the absolute magnitude of the decoder values decreases, so if you double the number of neurons, you should halve the learning rate. We decided that nengo should take care of those two scaling factors for you, since we seem to want them all the time.
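
A minimal sketch of that scaling (not the exact Nengo source; effective_pes_rate is just an illustrative name):

def effective_pes_rate(learning_rate, dt, n_neurons):
    # Per-timestep, per-neuron rate actually applied to the decoders
    return learning_rate * dt / n_neurons

# Doubling n_neurons halves each decoder update, so learning proceeds at
# roughly the same speed in model time regardless of population size
print(effective_pes_rate(1e-4, 0.001, 50))    # 2e-09
print(effective_pes_rate(1e-4, 0.001, 2000))  # 5e-11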

@mundya
Member

mundya commented Mar 30, 2016

It's quite possible that this is a fixed-point problem; I'm currently playing around with a different fixed-point format in a different branch. I guess this is more reason to get new_ensemble-new_learning merged ASAP.

@neworderofjamie
Contributor

Hi Travis, just to be clear, which branch of Nengo SpiNNaker are you on?

@studywolf
Author

New ensemble new learning! :)
@studywolf
Author

Ah, also I just realized that there were some changes to how the transform is handled in nengo in the last week or so, and the most recent master doesn't work with new_ensemble-new-learning. I was actually using this version of nengo: nengo/nengo@5128e51; the next commit breaks things.

@tcstewar
Contributor

Grrr... it would have been nice if we'd made that nengo change in a way that was backwards compatible.... At least it's just a small refactoring change.

@mundya and @neworderofjamie, do you think we should sort out some sort of process for identifying when changes to core nengo break things that depend on it? There are starting to be a fair number of these (nengo_gui, nengo_spinnaker, nengo_ocl, nengo_mpi), and mostly we've been handling it by trying to keep such changes to a minimum. This last month or so there's been a sudden increase in breaking changes because we're trying to get v2.1.0 sorted out; that should just be a temporary spike. But I'm not sure how annoying it has been for you to keep things in sync.

@neworderofjamie
Contributor

I believe @mundya has done some updates to support recent changes, but new_ensemble-new_learning is still awaiting merging with that/master.

I have had a quick look at the issue and we are dividing the learning rate passed to Nengo by the total number of neurons in the ensemble. I think the problem here is, as Andrew suggested, that we're hitting the bottom of fixed-point accuracy, i.e. 1×10⁻⁴ / 2000 is less than the least-significant bit of our current representation (roughly 0.00003, which for a 2000-neuron ensemble corresponds to a learning rate of 6×10⁻²). Hopefully @mundya's current work on switching to a more accurate fixed-point format will help a bit, but there is still going to be a limit to how low a learning rate we can handle unless we do something mega-funky.
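
A quick sketch of that arithmetic, assuming an S16.15 fixed-point format (LSB = 2⁻¹⁵ ≈ 0.00003); the exact format used in nengo_spinnaker may differ:

lsb = 2.0 ** -15                # ~3.05e-5, smallest representable step
per_neuron_rate = 1e-4 / 2000   # 5e-8, what the 2000-neuron run needs
print(per_neuron_rate < lsb)    # True: the scaled rate quantizes to zero
print(lsb * 2000)               # ~0.061, the ~6e-2 minimum usable learning_rate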


@tcstewar
Contributor

One random idea that might help with this: would it make sense to do stochastic rounding (with an LFSR) here? So that instead of values < 0.00003 being rounded to 0, they sometimes get rounded up to 0.00003?

Mind you, the stochastic thing turned out to not work so well for the neuron model.....
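
A minimal sketch of that idea, using a 16-bit Galois LFSR as the pseudo-random source; this is illustrative only, not code from nengo_spinnaker:

def lfsr16(state):
    # Advance a 16-bit Galois LFSR one step (taps 16, 14, 13, 11)
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

def stochastic_round(x, frac_bits=15, state=0xACE1):
    # Round a non-negative value onto a grid of 2**-frac_bits, rounding
    # up with probability equal to the discarded fraction
    scaled = x * (1 << frac_bits)
    lower = int(scaled)
    frac = scaled - lower
    state = lfsr16(state)
    if frac > state / float(1 << 16):
        lower += 1
    return lower / float(1 << frac_bits), state

# A weight update of 5e-8 (well below the ~3e-5 LSB) usually rounds to 0,
# but occasionally rounds up to one LSB, so tiny updates survive on average
value, state = stochastic_round(5e-8)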

@mundya
Member

mundya commented Jun 22, 2016

@studywolf - we've merged new_ensemble-new_learning into master, which should mean the transform issue is fixed. Do you mind checking whether you still see the same problem? If so, we can think about how to fiddle with the fixed-point format.

@studywolf
Author

Ah cool! I don't actually have access to a SpiNNaker board at the moment; @tcstewar, could you try out the fix?

@studywolf
Author

Oh wait, neither does he. @hunse, maybe you could get Brent in on it?

@hunse
Contributor

hunse commented Jun 22, 2016

@bjkomer?

@tcstewar
Contributor

I can check on Friday when I get back.... :)
