High-dimensional ensembles seem to be weird #130
A hopefully useful summary of the above:
Thank you for all the detail! Note to self, or @tcstewar if you want to try: what happens if the vector is [0, 0, ..., 1] and supplied using a
Hmm... I just tried this, and the output is actually correct. :(
I'm wondering whether running the nodes on the host is adding extra noise to debugging this problem, largely because the place-and-route solution isn't (currently) necessarily repeatable; consequently, where the network is placed will affect how reliably data from the host gets into the simulation. That said, I'll investigate this a little later today.
Here's a minimal(ish) example that exhibits some strange behaviour:
This model is a simple large-vector input being fed into one ensemble, from which we decode out a smaller-dimensional function which we send to a passthrough Node and probe it.
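The shape of that model can be sketched with plain NumPy (the exact dimensions are my assumption; the issue only mentions a 32-dimensional input later on):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration.
D_IN, D_OUT = 32, 4

# Large-vector input: a randomly chosen unit vector (aligned_vector=False).
x = rng.standard_normal(D_IN)
x /= np.linalg.norm(x)

# The ensemble decodes a smaller-dimensional function of the represented
# value, which is then sent on to a passthrough Node and probed.
def decode_function(v):
    return v[:D_OUT]

y = decode_function(x)
```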
Using branch dev0716, if we run this with 100 neurons, we run into this slicing problem:
To fix this, we adjust the memory padding in lif.py. For the examples run on this page, I've changed the 0.9 and 0.8 values to 0.5 and 0.5.
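To illustrate why those padding values matter (the constants and region sizes below are invented; the real numbers live in nengo_spinnaker's lif.py): if they act as fill fractions limiting how much of a core's memory an ensemble slice may use, lowering them forces large ensembles to be split across more cores.

```python
# Hypothetical sketch of fill-fraction style padding; all sizes invented.
DTCM_BYTES = 64 * 1024    # per-core data memory budget (illustrative)
BYTES_PER_NEURON = 700    # invented per-neuron memory cost

def max_neurons(fill_fraction):
    """Largest ensemble slice that fits under a given fill fraction."""
    return int(DTCM_BYTES * fill_fraction) // BYTES_PER_NEURON

# Tightening the allowed fill from 0.9 to 0.5 shrinks the largest slice
# that fits on one core, splitting big ensembles across more cores.
print(max_neurons(0.9), max_neurons(0.5))
```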
Running this model with different parameters produces a variety of results.
If I run the model as it is (n_neurons=80, fixed_input=False, function_of_time=False, aligned_vector=False) then the system is passing in the input during runtime and I get a whole bunch of watchdog errors:
If I give it a fixed input (n_neurons=80, fixed_input=True, function_of_time=False, aligned_vector=False), then the value gets precomputed, rolled into the bias, and it works great:
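The precomputation being described can be sketched as follows: with a fixed input, the encoded input current to each neuron is constant, so it can be folded into the bias before the simulation starts. All names and sizes here are illustrative, not nengo_spinnaker's actual code:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, dims = 80, 32

# Illustrative NEF-style ensemble parameters.
encoders = rng.standard_normal((n_neurons, dims))
gain = rng.uniform(0.5, 2.0, n_neurons)
bias = rng.uniform(-1.0, 1.0, n_neurons)

v = rng.standard_normal(dims)
v /= np.linalg.norm(v)  # the fixed input vector

# Per-timestep input current: J = gain * (encoders @ v) + bias.
J_runtime = gain * (encoders @ v) + bias

# Because v never changes, the same current can be precomputed and
# rolled into the bias, so nothing needs to stream in from the host.
bias_folded = bias + gain * (encoders @ v)
J_precomputed = bias_folded
```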
But, if I now increase the number of neurons to 100 (n_neurons=100, fixed_input=True, function_of_time=False, aligned_vector=False), it runs but gives an incorrect result (much, much smaller than it should be):
Bizarrely, if I set aligned_vector to True, it now works fine at both 80 and 100 neurons, even though the only change is that the desired output is [1,0,0,0,0,0,...] instead of a randomly chosen 32-dimensional unit vector.
(n_neurons=100, fixed_input=True, function_of_time=False, aligned_vector=True)
(n_neurons=80, fixed_input=True, function_of_time=False, aligned_vector=True)
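For concreteness, the two kinds of target vector differ only in construction; both are unit length, so representationally they should behave the same:

```python
import numpy as np

dims = 32

# aligned_vector=True: all the energy in the first component.
aligned = np.zeros(dims)
aligned[0] = 1.0

# aligned_vector=False: a randomly chosen unit vector.
rng = np.random.default_rng(2)
random_unit = rng.standard_normal(dims)
random_unit /= np.linalg.norm(random_unit)
```

The only difference is how the value is spread across components, which is what makes the discrepancy above so strange.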
Now let's see what happens if we set the input to be a function_of_time Node.
The function_of_time approach works great with 100 neurons and aligned vectors:
(n_neurons=100, fixed_input=False, function_of_time=True, aligned_vector=True)
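For readers unfamiliar with the distinction: a function_of_time Node produces its output purely as a function of simulation time, so it can be evaluated ahead of time rather than streamed from the host every step. A minimal sketch of the constant-output case used here (the vector contents are my assumption):

```python
import numpy as np

dims = 32
v = np.zeros(dims)
v[0] = 1.0  # the aligned target vector (assumed)

# The output depends only on t, never on runtime data from the host,
# so the backend can evaluate it up front.  Here it is constant in time.
def input_function(t):
    return v
```

If memory serves, nengo_spinnaker discovers this via its config (marking the Node as function_of_time), but check the backend's documentation for the exact incantation.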
If we don't go with an aligned_vector, we get the same problem as above:
(n_neurons=100, fixed_input=False, function_of_time=True, aligned_vector=False)
But now if I go down to 80 neurons, it dies with a watchdog error:
(n_neurons=80, fixed_input=False, function_of_time=True, aligned_vector=False)